guillefix 3rd April 2016 at 3:59pm

https://clara.io/

Autodesk Maya

3D printing innovation

guillefix 26th April 2016 at 6:47pm

Smartphone-powered 3D printer: http://www.olo3d.net/

Autodesk Project Escher

9 hallmarks of aging.jpg

A priori probability estimates from structural complexity

guillefix 18th May 2016 at 12:19pm

See draft paper.

See MMathPhys oral presentation

Coding theorem connects probability and complexity

AIT differs fundamentally from Shannon information theory because the latter is fundamentally a theory about distributions, whereas the former is a theory about the information content of individual objects (see Descriptional complexity).

If one assumes that the probability of generating a binary input string of length l is simply 2^{-l} (which is true for prefix codes, see Appendix A), then the most likely way to obtain output x by random sampling of inputs is with the shortest program that generates it, a string of length K(x).

Direct application of these results from AIT to many practical systems in science or engineering suffers from a number of well-known problems:

  • K(x) is formally uncomputable (due to the halting problem).
  • Most results in AIT only hold asymptotically, up to O(1) terms.
  • Many of the input-output maps in science and engineering are not UTMs.

The way these problems are tackled is described in the next sections.

Coding theorem for computable functions. We begin with a weaker form of the coding theorem, applicable to real world (computable) functions

P(x) \leq 2^{-K(x|f,n)+O(1)}

Eq. (2)

where K(x|f,n) is the complexity of output x, given the map f and the value n

(see M. Li and P. M. B. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer-Verlag New York Inc., 2008, and Lecture notes on descriptional complexity and randomness).

We provide a derivation of equation (2) in Appendix B, using standard results from AIT, such as: the complexity of a whole set is often much less than the complexity of individual members of the set (9). Informally, K(x|f,n) can be viewed as the length of computer code required to specify x, given that the function f and value n are already pre-programmed into the computer. Note that equation (2) is just an upper bound. In contrast to the full coding theorem of equation (1), there is no lower bound.

We mainly consider maps that comply with a few simple restrictions:

  • # inputs \gg # outputs.
  • # outputs \gg 1, to avoid finite-size effects.
  • The descriptional complexity of the map f itself doesn't grow with n. Basically, the map is represented by a fixed rule-set for computing outputs from inputs, largely independent of n.
  • Due to how the algorithmic complexity is approximated in practice, the map needs to be 'well-behaved' in the sense of not producing many outputs like the digits of \pi, which are algorithmically simple but have large entropy values and thus large values of \tilde{K}(x) (the computed approximation of K(x)). This is a practical problem that depends on the complexity measure used. The "complexity estimator" they used is described in Appendix F.

If instead the map is allowed to contain arbitrary amounts of information, then the map could assign arbitrary probabilties to the outputs, and any coding theorem-like behaviour would be lost. We discuss this fixed complexity condition further in Appendix C.

As we show in Appendix E, it turns out that a reasonably broad range of complexities will follow under quite general conditions for fixed-complexity maps.

They use the above conditions to argue that K(x|f,n) \approx K(x).

Approximately computing K(x). Although K(x) is uncomputable, it has been approximated using standard compression algorithms. \tilde{K}(x) is used to denote some real-world (computable) approximation to K(x).

Importance of O(1) terms. Experimental results on applying the coding theorem to short strings suggest that the O(1) terms are not very important.

Central ansatz and simplicity bias:

P(x) \lesssim 2^{-a\tilde{K}(x)-b}

where the constants a>0 and b depend on the mapping, but not on x.

We call this upper bound of the probability simplicity bias: High probability outputs must be 'simple', complex outputs must have exponentially lower probabilities. In contrast to the full coding theorem, the lack of a lower bound means that simple outputs may also have low probabilities.
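The bound can be probed numerically on a toy many-to-one map. The map below (sign pattern of a ±1 walk driven by the input bits) and the run-count complexity proxy are my own illustrative choices, not the paper's (which uses a Lempel-Ziv based estimator, Appendix F); the point is only that high-probability outputs tend to be simple.

```python
from collections import Counter
from itertools import product

def walk_sign_map(bits):
    """Toy input-output map: the sign pattern of a +1/-1 walk driven by the bits."""
    s, out = 0, []
    for b in bits:
        s += 1 if b == '1' else -1
        out.append('1' if s > 0 else '0')
    return ''.join(out)

def runs_complexity(x):
    """Crude complexity proxy: number of runs of equal symbols
    (a stand-in for a real estimator such as Lempel-Ziv)."""
    return 1 + sum(x[i] != x[i + 1] for i in range(len(x) - 1))

n = 10
# enumerate all 2^n inputs, so P(x) is exact rather than sampled
counts = Counter(walk_sign_map(''.join(b)) for b in product('01', repeat=n))
total = 2 ** n
# high-probability outputs should have low complexity (simplicity bias)
for x, c in counts.most_common(5):
    print(x, c / total, runs_complexity(x))
```

Plotting log2 P(x) against the complexity proxy for all outputs should trace out the linear upper envelope of equation (3), with many outputs falling well below it.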

They offer estimates for a and b in Appendix D. In Appendix B, they also argue that the upper bound of equation (3) should be tight for most inputs, but weak for many outputs.

Examples of simplicity bias in maps

See them!

Discrete RNA sequence to structure mapping

Coarse-grained ordinary differential equation

Coarse-grained stochastic partial differential equation. Black Scholes equation

Polynomial curves

Random matrix – bias but not simplicity bias

L-systems

Random walk map

Logistic map (see Nonlinear maps)

Predicting which of two outputs has higher probability


Connection to Chomsky hierarchy (see Formal language), Sloppy systems..


Appendix A: AIT

Appendix B: Upper and lower bound for computable maps

Upper bound: following derivation using Shannon-Fano code as in InfoTheory book.

Lower bound: Not sure. Ask, or read page 12.

Also many outputs must have probability below their upper bounds.

Appendix C: Fixed complexity map

Appendix D: Making predictions for P(x) in computable maps

Appendix E: Estimating the range of K(xf,n)K(x|f,n).

Arguments based on bounding complexity given the description: map + index of output. This gives upper bounds to the min and max complexities of 0 and \log_2{N_0}, respectively (everything up to an additive constant, O(1)).

For the max, we also need a lower bound, and this is given by the well-known fact in AIT that if one has N_0 different strings, not all of them can have complexity lower than \log_2{N_0}, as there are not enough such descriptions. In fact, most of the strings need to have a complexity of about \log_2{N_0}.

Appendix F: Approximations to K(x)K(x).

Appendix G: Simplicity bias and system size

Appendix H: On the intuitive connection of probability and complexity

Appendix I: Simplicity bias in the logbinomial distribution

Appendix J: Predicting the number of outputs, N_0, by fitting \alpha and estimating \max(K) from the known details of the system.

Appendix K: Further examples and figures

Continuous systems are sampled and discretized to create the output.

L-systems, Circadian rhythm

Cell cycle

Feed forward network. Sample networks, measure complexity of given network. by entropy of the distribution of outputs.

Logic gate

Appendix L: Histograms of complexity

Abiogenesis

guillefix 21st June 2016 at 3:13pm

Abiogenesis or biopoiesis or OoL (Origins of Life), is the natural process of life arising from non-living matter, such as simple organic compounds. Self-organization is expected to play a major role, both in the origin of life and in its subsequent Evolution.

A New Physics Theory of Life

Work of M. Eigen

See book by Prigogine, etc.

Artificial chemistry

  • replicator-based "genetics-first" approach
  • "metabolism-first" approach

See references at the end of page 54 in here

On Nature’s Strategy for Assigning Genetic Code Multiplicity

Origin and evolution of the genetic code: the universal enigma

Self-Organisation and Evolution of Biological and Social Systems

‘RNA world’ inches closer to explaining origins of life

Kauffman talk. Ideas: As evolution progresses, it creates new opportunities and a richer context for evolution to evolve further. Function can be defined as that subset of causal effects that contribute to causing a particular goal. In biology that goal is survival. The appropriate language in evolution goes beyond cause and effect, and includes enabling. Organisms are Kantian wholes, where the parts exist for and by means of the whole. The rest of the ideas seem to basically say that biology and evolution are too complex to (fully) describe with mathematical laws. Maybe we can understand the adjacent possible, look at convergent evolution...

Abstract algebra

guillefix 28th May 2016 at 11:09pm

Abstraction

guillefix 8th July 2016 at 2:37am

The Cognitive process by which a Concept is produced.

Achlioptas process

guillefix 13th June 2016 at 8:00pm

An Achlioptas process is a type of Explosive percolation, also known as m-edge processes, that involves choosing m candidate edges uniformly at random between any pair of nodes (compare with other Spanning cluster-avoiding processes) and applying a rule to select which one is actually chosen. These have been proven to be continuous in the thermodynamic limit, for fixed m. They are generalizations of Erdos-Renyi Random graphs.

The first such processes were proposed (in Explosive Percolation in Random Networks) and were thought to possibly show a discontinuous phase transition.

Achlioptas processes phase transitions are continuous

It has now been shown that the Percolation phase transition for Achlioptas processes (and in fact a more general class of k-vertex-rule percolation processes) is continuous (in the thermodynamic limit), but very steep (see Explosive Percolation Transition is Actually Continuous and Achlioptas process phase transitions are continuous). One can prove the continuity by looking at the asymptotic effect of removing a single link, as the total size goes to infinity. However, Oliver Riordan and Lutz Warnke proved it by showing, in essence, that the number of subcritical components that join together to form the emergent macroscopic-sized component is not sub-extensive in system size. In the words of Friedman and Landsberg, Achlioptas processes do not lead to the build-up of a "powder keg" (a type of cluster configuration that does lead to discontinuous transitions).

However, the model can be generalized to one that shows genuinely discontinuous transitions (see Anomalous critical and supercritical phenomena in explosive percolation). One way to achieve discontinuity is to allow the number of edges in the rule, m, to scale up with N, the network size, in a certain way.

The 2-edge Achlioptas process is the simplest type: start with N isolated nodes and add undirected, unweighted edges one at a time. This is done by choosing, at each step, two possible edges uniformly (and independently) at random from the set of N(N-1)/2 possible {edges between a pair of distinct nodes}. One adds only one of these edges, making a choice based on a systematic rule that affects the speed of development of a GCC.

m-edge rules are defined similarly.

Selection rules

Product rule. One choice that yields "explosive" percolation is to use the so-called "product rule", in which one always retains the edge that minimizes the product of the sizes of the two components that it merges (with an arbitrary choice when there is a tie).

Sum rule. The size of the new component formed is minimized.

Bohman–Frieze (BF) rule. Edge 1 is chosen if it joins two isolated vertices, and edge 2 otherwise.

A selection rule can be classified as a bounded-size or an unbounded-size rule. In a bounded-size selection rule, decisions depend only on the sizes of the components and, moreover, all sizes greater than some (rule-specific) constant K are treated identically.

There are also the more general m-edge rules, based on choosing m edges at each step and selecting one (or potentially more).
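The selection rules above can be sketched with a small union-find simulation. This is a minimal illustration of the 2-edge product rule, not code from any of the cited papers; the function names, parameters, and tie-handling are my own choices.

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def achlioptas_product_rule(n, n_steps, seed=0):
    """2-edge Achlioptas process: at each step draw two candidate edges and
    keep the one whose endpoint components have the smaller size product."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n          # component sizes, valid at the roots
    largest = 1
    history = []
    for _ in range(n_steps):
        e1 = rng.sample(range(n), 2)
        e2 = rng.sample(range(n), 2)
        def product(e):
            return size[find(parent, e[0])] * size[find(parent, e[1])]
        e = e1 if product(e1) <= product(e2) else e2  # ties broken arbitrarily
        a, b = find(parent, e[0]), find(parent, e[1])
        if a != b:  # union by size
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
            largest = max(largest, size[a])
        history.append(largest)
    return history
```

Plotting the returned history against the number of added edges (divided by n) should show the delayed, steep growth of the largest component that gives "explosive" percolation its name.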

See more rules here: Explosive percolation: Unusual transitions of a simple model

The Evolution of Random Graphs (product rule first suggested here).

Avoiding a giant component. Bounded-size rules are able to shift the percolation threshold.

Birth control for giants. The percolation transition is strongly conjectured to be continuous for all bounded-size rules

Product rule wins a competitive game

Hamiltonicity thresholds in Achlioptas processes

Acoustics

guillefix 7th February 2016 at 12:34am

Active colloid

guillefix 17th June 2016 at 5:41pm

Active matter

guillefix 13th July 2016 at 3:51pm

Active matter refers to a type of bulk matter, often soft condensed matter, that is an Active system, i.e. it produces its own driving energy (for example, self-propelling particles and micro-swimmers).

Driven matter is a closely related type of matter, where the system is externally driven.

The Hydrodynamics of Active Systems

Life at low Reynolds number, see Low Reynolds number

See also Complex fluid dynamics, Colloid physics

Single swimmer hydrodynamics: background

Swimming at low Reynolds number: Stokes equation

Important consequence for swimmers: Scallop theorem (see Kinematic reversibility in fluid dynamics)

Swimmer models

Far-flow fields

Used to calculate hydrodynamic interactions, for instance

The point-force problem for the Stokes equation can be solved using its Green function, often called the Oseen tensor (see here):

G_{ij}(\vec{r}) = \frac{1}{8\pi \mu} \left(\frac{\delta_{ij}}{|\vec{r}|}+\frac{r_i r_j}{|\vec{r}|^3}\right)

where the \frac{1}{8\pi \mu} prefactor is often omitted in the definition of the Oseen tensor.
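As a quick numerical sketch, the Oseen tensor above can be evaluated in pure Python (the viscosity value and the test point are arbitrary illustrative choices):

```python
import math

def oseen_tensor(r, mu=1.0):
    """G_ij(r) = (1/(8*pi*mu)) * (delta_ij/|r| + r_i r_j/|r|^3), as a 3x3 list."""
    norm = math.sqrt(sum(c * c for c in r))
    pref = 1.0 / (8.0 * math.pi * mu)
    return [[pref * ((1.0 if i == j else 0.0) / norm + r[i] * r[j] / norm ** 3)
             for j in range(3)] for i in range(3)]

G = oseen_tensor([2.0, 0.0, 0.0])
# On the axis of the point force the tensor is diagonal: the flow along
# the force direction is twice as strong as the flow perpendicular to it.
print(G[0][0] / G[1][1])  # 2.0
```

The factor-of-two anisotropy on the axis is a standard check on the Stokeslet flow field.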

Using the Green function to construct the general solution, one can construct a multipole expansion. As swimmers (on average, in steady state) don't accelerate, the fluid isn't exerting a net force on them, so they can't be exerting a net force on the fluid (Newton's third law). Therefore the monopole term (called the Stokeslet) isn't present. An exception to this is the relatively large microorganism Volvox, for which the gravitational force is significant, giving a net force to the problem and creating a Stokeslet flow. Therefore, the dominant term is generally the dipolar term:

v_i (\vec{r}) \approx \frac{\partial G_{ij}}{\partial x_k} (\vec{r}) D_{jk}

where v_i (\vec{r}) is the velocity field, and

D_{jk} = - \int f_j \xi_k d\vec{\xi}

It is conventional to let:

D_{jk} \rightarrow D_{jk} -\frac{1}{3}D_{ii} \delta_{jk} \equiv S_{jk} + T_{jk}

where subtracting the trace term \frac{1}{3}D_{ii}\delta_{jk} doesn't change the velocity field v_i (\vec{r}) because \vec{\nabla} \cdot \mathbf{G} = \vec{0}. S_{jk} is called the stresslet, and T_{jk} is called the rotlet. The rotlet is zero if the net torque on the fluid is zero, which it is for active microswimmers.

Note that the dipolar flow has nematic symmetry; this is important in the collective behavior of active swimmers.

We can have two kinds of dipolar flow around a swimmer:

  • pusher, or extensile. Like that of E. coli.
  • puller, or contractile. Like that of Chlamydomonas. See figure of chlamydomonas flow.

chlamydomonas flow source

Single microswimmer hydrodynamics: applications

  • bacteria enhance diffusion as a result of the flow fields they produce
  • motion of swimmers in background/external flow.
  • interactions with surfaces.

Collective hydrodynamics of active entities

  • Beris-Edwards equations
  • extra stress from active particles, equals the average value of the stresslet, and gives the active stress ζQjk-\zeta Q_{jk}
  • Different kinds of instabilities and patterns arise.

Collective hydrodynamics of active entities: applications

  • active turbulence
  • interactions between topological defects, walls (regions of high bend perturbation), and flows (jets, and vortical).
  • Lyotropic active nematics and active anchoring
  • Example system: microtubules and Molecular motors.

Other applications

  • More general Active systems and types of active matter: dry systems, systems with polar symmetry, density variations, inertia.
  • Active machines and Self-assembly
  • Microswimmers moving in a viscoelastic medium. Living liquid crystals represent a novel system where bacteria swim in a nematic liquid.
  • Biological systems (see Biological matter):
    • molecular motors walking along microtubules contribute to cell division resulting from spindle mitosis.
    • cytoplasmic streaming, flow driven by the motion of motors along the cell walls, presumably to aid the transport of nutrients around the cell.
    • The extent to which hydrodynamics (even at nanometre scales) affects motor motion [74], the way in which multiple motors can combine to move cargo and mechanisms for cargo transport in the crowded cellular environment remain largely unexplored.
    • there is increasing evidence that cell motility is linked to the physical environment. Interactions between cells, the spreading of cellular layers and the possible role of flow in Morphogenesis are also of interest.
  • Active gel physics

Physics of Microswimmers – Single Particle Motion and Collective Behavior

In pursuit of propulsion at the nanoscale

Biphasic, Lyotropic, Active Nematics

Papers on active matter

Self organization in active matter

Cytoplasmic streaming

A physical perspective on cytoplasmic streaming

Cytoplasmic streaming in plant cells emerges naturally by microfilament self-organization

Spontaneous Circulation of Confined Active Suspensions

Instabilities, pattern formation, and mixing in active suspensions

Spindle self-organization

Physical basis of spindle self-organization

Active system

guillefix 18th June 2016 at 1:21am

A system with constituents that are able to produce their own energy. For instance, they are often self-propelling. See also the wiki article. Due to the energy consumption, these systems are intrinsically out of thermal equilibrium.

If the system is made of bulk matter, it's called Active matter.

Examples of active systems are schools of fish, flocks of birds, bacteria, artificial self-propelled particles, and self-organising bio-polymers such as microtubules and actin, both of which are part of the cytoskeleton of living cells.

Dry active systems


A prominent example of active systems is Active colloids

Biophysics (biological systems are active systems).

Flocks, herds, and schools: A quantitative theory of flocking. See Complex systems

DNA nanomachines in DNA nanotechnology

Self-assembled artificial cilia

Microscopic artificial swimmers

Activities and Sensitivities in Boolean Network Models

guillefix 12th July 2016 at 1:47am

See Dynamical Instability in Boolean Networks as a percolation Problem, Boolean network

New paper: Network Structure and Activity in Boolean Networks

Activities and Sensitivities in Boolean Network Models

Boolean functions in which few variables have high importance and most other variables have low importance play a role in eliciting order from Boolean networks.

We should mention in passing that much of the discussion in this Letter can be formulated in terms of spectral methods or harmonic analysis on the n-cube.

Boolean function derivative

Activity

Sensitivity

For a random Boolean function with bias p (so that each bit in the truth table is 1 with probability p and 0 otherwise), the probability that two Hamming neighbors are different is equal to 2p(1-p), since one can be 1 (with probability p) and the other 0 (with probability 1-p), and vice versa.

From this one can see that E[\alpha_i^f] = 2p(1-p), and E[s^f] = K \cdot 2p(1-p), where E means the expectation value w.r.t. the probability distribution of the truth tables. We can then conclude that highly biased functions (p far away from 0.5) are expected to have low average sensitivity.
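The expectation E[s^f] = K·2p(1-p) is easy to check by Monte Carlo over random truth tables (this is my own verification sketch; the sample size and bias value are arbitrary):

```python
import random

def random_boolean_function(K, p, rng):
    """Random truth table on K inputs: each output bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(2 ** K)]

def average_sensitivity(table, K):
    """Average over inputs x of the number of single-bit flips that change f(x)."""
    total = 0
    for x in range(2 ** K):
        total += sum(table[x] != table[x ^ (1 << i)] for i in range(K))
    return total / 2 ** K

rng = random.Random(0)
K, p = 3, 0.3
n_samples = 5000
est = sum(average_sensitivity(random_boolean_function(K, p, rng), K)
          for _ in range(n_samples)) / n_samples
print(est, K * 2 * p * (1 - p))  # estimate should be close to 1.26
```

Lowering p toward 0 (or raising it toward 1) drives the estimate toward zero, matching the conclusion that highly biased functions have low average sensitivity.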

For a Boolean function f, a canalizing variable is a variable that determines (canalizes) the value of f if it has a given value. See the article for a more precise definition. For a random Boolean function with a single canalizing variable, it is shown here that the expected activity of the canalizing variable is 1/2, while that of the rest of the variables is 1/4.

The average sensitivity (when averaged over all the functions in the network) appears to be a good parameter for predicting whether the dynamics of the Boolean network are ordered or chaotic. This can be observed by looking at Derrida curves.

Activity

guillefix 8th July 2016 at 3:02am

A Process carried out by a sentient being.

Acyclic Network Figure

guillefix 19th January 2016 at 4:35pm

Acyclic networks

guillefix 19th January 2016 at 4:56pm

They can always be drawn with the vertices arranged so that all edges point downward, as in Fig 1. Conversely, all networks that can be drawn like this are acyclic.

Fig 1.

From the proof of this fact one can deduce an algorithm for finding if a network is acyclic or not:

Furthermore, the adjacency matrix of such a graph can always be made upper-triangular with zeros on the diagonal (as there are no self-loops). The eigenvalues of an acyclic graph are thus all zero. One can also show the converse, thus:

A network is acyclic if and only if it has a nilpotent adjacency matrix
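The nilpotency criterion above can be checked directly: for a graph on n nodes, acyclicity is equivalent to A^n = 0. A pure-Python sketch (the example graphs are my own):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_nilpotent_adjacency(A):
    """A network is acyclic iff A^n = 0, where n is the number of nodes."""
    n = len(A)
    P = A
    for _ in range(n - 1):   # after n-1 multiplications, P = A^n
        P = mat_mult(P, A)
    return all(v == 0 for row in P for v in row)

dag = [[0, 1, 0],    # 0 -> 1 -> 2, acyclic
       [0, 0, 1],
       [0, 0, 0]]
cycle = [[0, 1, 0],  # 0 -> 1 -> 2 -> 0, a 3-cycle
         [0, 0, 1],
         [1, 0, 0]]
print(is_nilpotent_adjacency(dag), is_nilpotent_adjacency(cycle))  # True False
```

This works because the (i,j) entry of A^k counts walks of length k from i to j, and a graph on n nodes with no cycles has no walks of length n.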

Additive manufacturing

guillefix 3rd July 2016 at 6:09pm

Adhesive

guillefix 11th May 2016 at 12:56pm

An adhesive is any substance applied to one surface, or both surfaces, of two separate items that binds them together and resists their separation.

Some synonyms: glue, cement, mucilage, or paste

AdS/CFT correspondence

guillefix 24th June 2016 at 1:27am

Aerosol

guillefix 9th May 2016 at 8:53pm

Aerosol is a Colloid of fine solid particles or liquid droplets, in air or another gas.

Aerospace engineering

guillefix 25th June 2016 at 3:28am

Aesthetics

guillefix 21st January 2016 at 9:02pm

Ageing & Longevity

guillefix 28th June 2016 at 3:44pm

Agriculture

guillefix 8th April 2016 at 4:43pm

Agriculture is the cultivation of animals, plants, fungi, and other life forms for food, fiber, biofuel, medicinal and other products used to sustain and enhance human life (https://en.wikipedia.org/wiki/Agriculture)

Agronomy: agriculture of plants

Agriculture & Agronomy

AI in medicine

guillefix 12th July 2016 at 12:58am

AI safety

guillefix 23rd June 2016 at 3:09pm

Algebra (algebraic structure)

guillefix 14th July 2016 at 3:38pm

An algebra is a family R of subsets of a set X s.t.:

  • \emptyset \in R
  • Closed under finite unions: A,B \in R \Rightarrow A \cup B \in R
  • Closed under complements: A \in R \Rightarrow X \setminus A \in R

If the algebra is closed under countable unions (not just finite), then it is a Sigma-algebra
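For a finite X the three axioms can be checked mechanically. A small sketch (the example families are my own):

```python
from itertools import combinations

def is_algebra(X, R):
    """Check the algebra axioms for a family R of frozensets of subsets of X."""
    X = frozenset(X)
    R = set(R)
    if frozenset() not in R:
        return False                                    # must contain the empty set
    if any(X - A not in R for A in R):
        return False                                    # closed under complements
    if any(A | B not in R for A, B in combinations(R, 2)):
        return False                                    # closed under pairwise (hence finite) unions
    return True

X = {0, 1, 2}
power_set = [frozenset(s) for s in
             ([], [0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2])]
print(is_algebra(X, power_set))                      # True
print(is_algebra(X, [frozenset(), frozenset([0])]))  # False: complement {1,2} missing
```

On a finite X every algebra is automatically a sigma-algebra, since only finitely many unions are possible; the distinction only matters for infinite X.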

video

Algebra-like algebraic structures

guillefix 14th July 2016 at 3:35pm

Algebraic geometry

guillefix 29th March 2016 at 4:50pm

Algebraic structure

guillefix 28th June 2016 at 4:42pm

In mathematics, and more specifically in abstract algebra, an algebraic structure is a set (called carrier set or underlying set) with one or more finitary operations defined on it that satisfies a list of axioms.

https://en.wikipedia.org/wiki/Algebraic_structure

Group-like algebraic structures

Ring-like algebraic structures

Lattice-like algebraic structures

Module-like algebraic structures

Algebra-like algebraic structures

Algebraic topology

guillefix 23rd May 2016 at 11:05pm

Algorithm to compute LZ complexity measure

guillefix 27th June 2016 at 10:27pm

Algorithm to compute LZ complexity measure

written in Python

def KC_LZ(string):
    n = len(string)
    s = '0' + string
    c = 1
    l = 1
    i = 0
    k = 1
    k_max = 1
    stop = 0

    while stop == 0:
        if s[i+k] != s[l+k]:
            if k > k_max:
                k_max = k  # k_max stores the length of the longest pattern in the LA that has been matched somewhere in the SB

            i = i+1  # we increase i while the bit doesn't match, looking for a previous occurrence of a pattern. s[i+k] is scanning the "search buffer" (SB)

            if i == l:  # we stop looking when i catches up with the first bit of the "look-ahead" (LA) part
                c = c+1  # if we were actually compressing, we would add the new token here; here we just count reconstruction STEPs
                l = l+k_max  # we move the beginning of the LA to the end of the newly matched pattern

                if l+1 > n:  # if the LA surpasses the length of the string, then we stop
                    stop = 1
                else:  # after a STEP,
                    i = 0  # we reset the searching index to the beginning of the SB (beginning of string)
                    k = 1  # we reset the pattern-matching index. Note that we are actually matching against the first bit of the string, because we added an extra 0 above, so i+k is the first bit of the string
                    k_max = 1  # and we reset the max length of the matched pattern to k
            else:
                k = 1  # we've finished matching a pattern in the SB, and we reset the matched-pattern length counter
        else:  # we increase k as long as the pattern matches, i.e. as long as the s[l+k] bit string can be reconstructed from the s[i+k] bit string. Note that the matched pattern can "run over" l because the pattern starts copying itself (see LZ 76 paper). This is just what happens when you apply the cloning tool in Photoshop to a region where you've already cloned...
            k = k+1

            if l+k > n:  # if we reach the end of the string while matching, we need to add that to the tokens, and stop
                c = c+1
                stop = 1

    # a la Lempel and Ziv (IEEE Trans. Inf. Theory IT-22, 75 (1976)),
    # h(n) = c(n)/b(n), where c(n) is the Kolmogorov complexity
    # and h(n) is a normalised measure of complexity.
    complexity = c

    # b = n*1.0/np.log2(n)
    # complexity = c/b

    return complexity

Algorithmic information theory

guillefix 14th July 2016 at 3:47am

See Descriptional complexity and MMathPhys oral presentation. See also Information theory, Theory of computation, Complexity theory, and Computational complexity

Good lecture notes for AIT: http://www.cse.iitk.ac.in/users/satyadev/a10/a10.html

Kolmogorov complexity

Plain Kolmogorov Complexity

See Elements of information theory by Cover and Thomas (chap 14)

Conditional Kolmogorov complexity

<x, y> is the pairing function (see Computability theory). The conditional Kolmogorov complexity is often defined as in Def. 2.0.1, but with y being l(x), the length of x.

Universality of Kolmogorov complexity

For sufficiently long x, the length of this simulation program can be neglected, and we can discuss Kolmogorov complexity without talking about the constants.

Note: in the book on info theory, they use the ceiling function for the {number of bits in a binary representation of a number}; however, as mentioned here, that fails for powers of 2, so we need to use \lfloor \log(n) \rfloor + 1
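This is easy to verify exhaustively for small n (the range below is an arbitrary choice):

```python
import math

# The number of bits in the binary representation of n is floor(log2(n)) + 1.
for n in range(1, 1025):
    assert math.floor(math.log2(n)) + 1 == len(bin(n)) - 2

# The ceiling version fails exactly at powers of two, e.g. n = 8
# needs 4 bits ('1000') but ceil(log2(8)) = 3.
print(math.ceil(math.log2(8)), len(bin(8)) - 2)  # 3 4
```

For non-powers of two the two formulas agree, since log2(n) is then non-integer and ceil(log2(n)) = floor(log2(n)) + 1.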

Bounds

Upper bound on Kolmogorov complexity

where \log^*(x) = \log(x) + \log(\log(x)) + \log(\log(\log(x))) + ...

Lower bounds on Kolmogorov complexity

There are very few sequences with low complexity

Relations to entropy

Kraft inequality

Relation to entropy

as n \rightarrow \infty. See proof in the book (uses Kraft's inequality, Jensen's inequality, and the concavity of the entropy). Therefore the average Kolmogorov complexity of the string approaches the entropy of the random variable from which the letters of the string are sampled. The compressibility achieved by the computer goes to the entropy limit.

Theorem 14.4.3 There are an infinite number of integers n such that K(n) > \log{n}.

Algorithmic randomness and incompressible sequences

Theorem 14.5.1 Let X_1, X_2, ..., X_n be drawn according to a Bernoulli(\frac{1}{2}) process. Then

P(K(X_1 X_2 ... X_n | n) < n-k) < 2^{-k}

For example, the fraction of sequences of length n that have complexity less than n − 5 is less than 1/32. This motivates the following definition.
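The bound is just the counting argument made quantitative; as a worked check of the n = ?, k = 5 example (my own choice of n):

```python
# Counting argument behind Theorem 14.5.1: there are at most 2^(n-k) - 1
# binary programs shorter than n-k bits, so at most that many strings of
# length n can have complexity below n-k.
n, k = 12, 5
short_descriptions = 2 ** (n - k) - 1   # programs of length < n-k
fraction_bound = short_descriptions / 2 ** n
print(fraction_bound, 2 ** (-k))        # 0.031005859375 < 0.03125
```

The bound is independent of n: the fraction of length-n strings compressible by more than k bits is always below 2^{-k}.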

Definitions of algorithmic randomness, incompressibility.

Strong law of large numbers for incompressible sequences

In general, we can show that if a sequence is incompressible, it will satisfy all computable statistical tests for randomness. (Otherwise, identification of the test that x fails will reduce the descriptive complexity of x, yielding a contradiction.) In this sense, the algorithmic test for randomness is the ultimate test, including within it all other computable tests for randomness.

We now remove the expectation from Theorem 14.3.1

Universal probability

Imagine a monkey sitting at a keyboard and typing the keys at random.
The probability of an input program (string) p is 2^{-l(p)}. Simple strings are more likely than complicated strings of the same length.

Universality of the universal probability

Remark (Bounded likelihood ratio). The likelihood ratio P_{\mathcal{U}}(x)/P_{\mathcal{A}}(x) is bounded, and doesn't go to 0 or \infty for any x; thus no universal probability can be totally discarded relative to any other in hypothesis testing. This is essentially because any universal computer can simulate any other, and in that sense the probability distribution obtained by feeding random input into one is also contained in the distribution obtained from the other.

In that sense we cannot reject the possibility that the universe is the output of monkeys typing at a computer. However, we can reject the hypothesis that the universe is random (monkeys with no computer). 😮

The example indicates that a random input to a computer is much more likely to produce “interesting” outputs than a random input to a typewriter. We all know that a computer is an intelligence amplifier. Apparently, it creates sense from nonsense as well.

The halting problem and the noncomputability of Kolmogorov complexity

Epimenides' liar paradox, Gödel's incompleteness theorem, Halting problem

Related: Berry's paradox and Bechenbach's paradox

Chaitin's \Omega

Definition

Properties:

1. \Omega is noncomputable

2. \Omega is a "philosopher's stone", or an oracle. Knowledge of \Omega to n bits can be used to prove any theorem for which {a proof expressible in less than n bits exists}.

3. \Omega is algorithmically random.

Theorem 14.8.1. \Omega cannot be compressed by more than a constant; that is, there exists a constant c such that

K(\omega_1\omega_2...\omega_n) \geq n-c for all n

Universal gambling

The universal gambling scheme on a random sequence does asymptotically as well as a scheme that uses prior knowledge of the true distribution!

Universal prediction

Occam's razor

......

Coding theorem

The proof involves an extension of the {tree construction used for Shannon-Fano-Elias codes for computable probability distributions} to the uncomputable universal probability distribution.

As stated in the proof in the InfoTheory book, "However, there is no effective procedure to find the lowest depth node corresponding to x". This means that the coding they use in the proof is incomputable. However, they show it exists, and that it can be decoded in finite time, giving a description of the string.


See also Sequence spaces


http://www.scholarpedia.org/article/Algorithmic_information_theory

The discovery of algorithmic probability. Seems like a very nice read. Solomonoff's theory of inductive inference.

An Introduction to Kolmogorov Complexity and Its Applications (1 cr)

Algorithmic Learning Theory (ALT) 2016

Expanded and improved proof of the relation between description complexity and algorithmic probability

http://www-igm.univ-mlv.fr/~berstel/Articles/2010HandbookCodes.pdf

ALGORITHMS OF INFORMATICS

Algorithmics on compressed objects

guillefix 28th June 2016 at 5:29pm

Algorithms

guillefix 30th June 2016 at 1:42am

Also called imperative knowledge in Computer science

Analysis of algorithms

https://www.youtube.com/watch?v=gwlevtaC-u0&list=PL6ED884C7AEE68027

Discrete algorithms conference papers

http://www2.idsia.ch/cms/fun16/

FUN with biological, combinatorial, cryptographic, distributed, game-theoretic, geometrical, graph, mobile, Internet, parallel, optimization, randomized, robotics, space-conscious, and string algorithms; FUN with visualization of algorithms.

ALGORITHMS OF INFORMATICS

Algorithm visualizer

https://www.youtube.com/channel/UCC_RpWFSbwHib_LLhHJwB3w/videos?shelf_id=0&view=0&sort=dd

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/

Algorithmics on compressed objects

AlphaGo

guillefix 5th July 2016 at 3:49pm

Alzheimer's disease

guillefix 12th July 2016 at 12:57am

Analysis

guillefix 29th May 2016 at 12:32am

There are many, many things, so it makes sense to look at what happens when we have more and more things.

Real Analysis: Lectures by Professor Francis Su

Mathematics - Measure and Integration

Analysis of algorithms

guillefix 1st July 2016 at 2:12am

AofA

Analysis of the Computational complexity of Algorithms, i.e. finding out how much time and how much memory an algorithm takes to run.

Analytic Combinatorics, Part I (Analysis of Algorithms)

Already recognized as important by Babbage, Turing. However, the modern field of analysis of algorithms was started by Donald Knuth, who recognized that mathematics had the tools to analyze algorithms. Things like the following are useful tools for this:

Books: four volumes of The art of computer programming.

Analytic combinatorics

guillefix 27th June 2016 at 10:35pm

Analytic combinatorics is a calculus (set of mathematical tools) for analyzing properties of large combinatorial structures.

Book website of book videocourse

Symbolic method

  • Define a combinatorial class:
    • Define a class of combinatorial objects
    • Define a notion of size (and an associated generating function)
  • Use standard operations to develop a specification of the structure

Result: A direct derivation of a GF equation (implicit or explicit), i.e. an equation that the Generating function must satisfy.

Classic next steps:

  • Extract coefficients
  • Use classic asymptotics to estimate coefficients

Result: Asymptotic estimates that quantify the desired properties.
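As a toy illustration of these steps (my own sketch, not from the course): binary trees give the implicit GF equation $B(z) = 1 + zB(z)^2$, and equating coefficients of $z^n$ on both sides extracts them directly via a convolution recurrence:

```python
def gf_coefficients(n_terms):
    """Extract coefficients [z^n] B(z) from the implicit GF equation
    B(z) = 1 + z * B(z)^2 (binary trees counted by internal nodes).
    Equating coefficients of z^n gives the convolution recurrence
    b_n = sum_{i=0}^{n-1} b_i * b_{n-1-i}, with b_0 = 1."""
    b = [1]
    for n in range(1, n_terms):
        b.append(sum(b[i] * b[n - 1 - i] for i in range(n)))
    return b

# The classic asymptotic estimate for these (Catalan) coefficients is
# b_n ~ 4^n / (sqrt(pi) * n^(3/2)), the "classic asymptotics" step above.
print(gf_coefficients(6))  # [1, 1, 2, 5, 14, 42]
```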

Symbolic method for unlabelled structures (Ordinary generating function)

Symbolic method for labelled structures (Exponential generating function)


Video course

Analytic Combinatorics, Part II (Analytic Combinatorics)

In Coursera: Analytic Combinatorics

Applications: Analysis of algorithms, Random deterministic automata, ...

Analytical chemistry

guillefix 8th April 2016 at 8:18pm

Anatomy

guillefix 8th July 2016 at 7:02pm

Studies the structure of organisms. Goes together with Physiology, which studies the function of organisms.

BioDigital- 3D Human Visualization Platform for Anatomy and Disease

Tissues

Epithelial tissue. Covers stuff

Connective tissue. Connects stuff (like bones and muscles). Defined by presence of an extracellular matrix. Blood and fat are thus considered connective tissue.

Muscle tissue. Actin & Myosin

Nerve tissue. Neurons, glial cell.

Organs

Organ systems

Angiosperm reproduction

guillefix 28th June 2016 at 4:24am

Animal & veterinary sciences

guillefix 8th April 2016 at 8:25pm

Animating maths

guillefix 20th June 2016 at 5:34pm

Animation

guillefix 4th February 2016 at 9:44pm

Anime

guillefix 17th May 2016 at 10:18pm

Anthropology

guillefix 17th May 2016 at 1:05am

Anthropology is the study of humans and their societies in the past and present.

https://en.wikipedia.org/wiki/Anthropology

Wikipedia's contents: People and self

Anti-ageing innovation

guillefix 30th June 2016 at 3:35am

Bioviva

FIRST GENE THERAPY SUCCESSFUL AGAINST HUMAN AGING

About Deep Knowledge Life Sciences (DKLS), BGRF and Avi Roy, SENS and Aubrey de Grey

http://www.longevityreporter.org/

https://global-longevity-initiative.webflow.io/

PREVENT . RESTORE . PRESERVE

Do not go gentle into that good night, Old age should burn and rave at close of day; Rage, rage against the dying of the light. — Dylan Thomas

NLA, CASMI, Oxford and BGRF to develop the Global Healthspan Extension Initiative

two of her own company’s experimental gene therapies:

  • one to protect against loss of muscle mass with age,
  • another to battle stem cell depletion responsible for diverse age-related diseases and infirmities.

Telomeres are short segments of DNA which cap the ends of every chromosome, acting as ‘buffers’ against wear and tear. They shorten with every cell division, eventually getting too short to protect the chromosome, causing the cell to malfunction and the body to age.

“Current therapeutics offer only marginal benefits for people suffering from diseases of aging. Additionally, lifestyle modification has limited impact for treating these diseases. Advances in biotechnology is the best solution, and if these results are anywhere near accurate, we’ve made history”

Note: this is awesome

It remains to be seen whether the success in leukocytes can be expanded to other tissues and organs, and repeated in future patients.

Gene therapy to save the world

IVAO to announce plans to invest over $1 billion in aging and longevity projects at a conference in St Petersburg.

10 responses to “Hacking Aging” What would you say if I told you that aging happens not because of accumulation of stresses, but rather because of the intrinsic properties of the gene network of the organism? I’m guessing you’d be like: :o .

Antumbr

guillefix 23rd June 2016 at 3:08pm

https://checkvist.com/checklists/563670#

http://beyondplm.com/ PLM/PDM in cloud, in blockchain. supply chain management. healthcare system.

SAP, grabcapd, onshape

Ethereum Provenance

Ascribe, legal stuff

Web development

Apollonian gasket

guillefix 9th May 2016 at 8:04pm

Apollonian networks

guillefix 9th May 2016 at 8:08pm

Appliance

guillefix 5th July 2016 at 4:26am

A device or piece of equipment designed to perform a specific task, typically a domestic one.

Application of percolation models in topography

guillefix 11th June 2016 at 3:20pm

Topography studies features of the surface of the Earth, as well as of other planets. These can be described as landscapes. Percolation models and Percolation theory have been applied to understand these.

A landscape is a height profile, usually defined on a square lattice, where each cell's elevation value at position x represents the average elevation over the entire footprint of the cell (site). Now imagine that water is dripping uniformly over the landscape and fills it from the valleys to the mountains, flowing out through the open boundaries. As it rains, watershed lines may form which divide the landscape into different drainage basins. These are important in geomorphology, e.g. in water management [113] and landslide and flood prevention [114]

it is possible to determine the watershed lines based on the iterative application of invasion percolation [115].

Another kind of percolation that can occur: Raising the water level makes lakes join together, and eventually a lake that spans the whole landscape may form. However, whether the percolation transition is critical or not depends on the properties of the surface landscape (in particular on correlation functions).

These ideas have been applied to study the topography of the Earth, where they found that the present sea level is a critical level in their model. This finding elucidates the origins of the appearance of ubiquitous scaling relations observed in the various terrestrial features on Earth.

Applied complex analysis

guillefix 28th April 2016 at 2:37am

Oxford course Syllabus: Review of core complex analysis, especially continuation, multifunctions, contour integration, conformal mapping and Fourier transforms. Riemann mapping theorem (in statement only). Schwarz-Christoffel formula. Solution of Laplace's equation by conformal mapping onto a canonical domain. Applications to inviscid hydrodynamics: flow past an aerofoil and other obstacles by conformal mapping; free streamline flows of hodograph plane. Unsteady flow with free boundaries in porous media. Application of Cauchy integrals and Plemelj formulae. Solution of mixed boundary value problems motivated by thin aerofoil theory and the theory of cracks in elastic solids. Reimann-Hilbert problems. Cauchy singular integral equations. Transform methods, complex Fourier transform. Contour integral solutions of ODE's. Wiener-Hopf method.

Jordan's lemma

Archeology

guillefix 7th May 2016 at 3:35am

Architecture

guillefix 1st July 2016 at 11:44pm

Also use tools from Design optimization, including Genetic algorithms applied to grid structure optimization which look really cool.

Simulating rain for architecture

Programming architecture is a company that solves problems in the design and construction phase of complex architectural objects. Offers software and knowledge.

https://www.youtube.com/watch?v=YxJJeU9mVSU&list=PLhOObpoQndRmAGJh1mvnE6ye0z-bmIhxF

Area studies

guillefix 8th April 2016 at 6:06pm

Arithmetic compression

guillefix 28th June 2016 at 4:33am

Arrival of the frequent

guillefix 11th June 2016 at 1:56am

See MMathPhys oral presentation.


Theoretical framework

Following The Arrival of the Frequent: How Bias in Genotype-Phenotype Maps Can Steer Populations to Local Optima (remember notes here are complementary to the paper, and don't cover all of its content, only those parts where I thought there were gaps for me to understand it), we can study the effect of the structure of the genotype-phenotype (GP) map in the model of Evolution known as the Wright-Fisher model (see Population genetics). We use the haploid Wright-Fisher model with selection, where for each individual in the generation at time $t+1$ we choose a single parent from the individuals at the previous generation $t$, according to the rule described there. We then include the effect of mutations, by assigning to the new individual a genotype of length $L$ as follows:

  • Copy the genotype of parent.
  • For each of the letters in the genotype, replace it with probability $\mu$, the point mutation rate. A replaced letter is substituted by a different letter, chosen uniformly at random from the remaining letters.

Note: the genotype is defined as a sequence of $L$ letters taken from an alphabet of $K$ letters.
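The reproduction-plus-mutation step above can be sketched in a few lines (my own code; fitness-proportional parent choice is assumed as the haploid Wright-Fisher selection rule):

```python
import random

def wright_fisher_step(pop, fitnesses, mu, K, rng):
    """One generation of the haploid Wright-Fisher model with point mutations.
    pop: list of genotypes (tuples of letters in 0..K-1); fitnesses: parallel list.
    Each child picks a parent with probability proportional to fitness; then each
    letter independently mutates with probability mu to a *different* letter,
    chosen uniformly from the other K-1 letters."""
    N = len(pop)
    parents = rng.choices(range(N), weights=fitnesses, k=N)  # selection
    children = []
    for i in parents:
        g = list(pop[i])
        for j in range(len(g)):
            if rng.random() < mu:
                g[j] = (g[j] + rng.randrange(1, K)) % K      # a different letter
        children.append(tuple(g))
    return children

rng = random.Random(0)
pop = [(0,) * 20] * 100          # monomorphic starting population, L = 20, K = 4
pop = wright_fisher_step(pop, [1.0] * 100, mu=0.01, K=4, rng=rng)
```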

See Mean field approximation to average number of phenotypes discovered in Wright-Fisher model, where some equations are found. The main result is that the expected number of individuals with phenotype $p$ that arise at generation $t$ can be approximated as

$$m_p(t) \approx L\mu \sum_{i=1}^N \Phi_{pq} = N L \mu \Phi_{pq}$$ (Eq. 3)

under certain assumptions, explained in that tiddler.

Polymorphic limit

If $NL\mu \gg 1$, the population naturally spreads over different genotypes, a regime called the polymorphic limit. See the Polymorphic limit (Wright-Fisher model) tiddler for details. Main points:

To model neutral exploration, we let $1+s_p = \delta_{pq}$, where $\delta_{pq}$ is a Kronecker delta

The time when {{the probability of having discovered a p-type individual (produced a p-type offspring)} is $\alpha$} is found by:

$$T = \frac{-\ln(1- \alpha)}{N L\mu \Phi_{pq}}$$ (Eq. 4)
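Eq. 4 is easy to evaluate numerically; a small sketch with illustrative parameter values of my own choosing:

```python
import math

def discovery_time(alpha, N, L, mu, phi_pq):
    """Eq. 4: generations until a p-type offspring has been produced with
    probability alpha, T = -ln(1 - alpha) / (N * L * mu * phi_pq)."""
    return -math.log(1 - alpha) / (N * L * mu * phi_pq)

# e.g. N=1000, L=100, mu=1e-4 gives N*L*mu = 10 mutations per generation;
# a phenotype with phi_pq = 0.01 is then found with probability 1/2 after
# about ln(2)/0.1 ≈ 7 generations:
T = discovery_time(0.5, 1000, 100, 1e-4, 0.01)
```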

Monomorphic limit

Neutral spaces can be astronomically large, much bigger than even the largest viral or bacterial populations (see this paper). In that case, the local neighborhood of the population may not be fully representative of the neighborhood of the entire space.

This scenario can be most easily understood in the monomorphic limit, when mutants are rare: $NL\mu \ll 1$

Now, the (average) rate of neutral mutations (per individual) is $\nu = L\mu\rho$, where $\rho$ is the probability that a mutation is neutral.

See more in the Monomorphic limit (Wright-Fisher model) tiddler, and at the paper.

We can see that in the large genome limit, the phenotype pp is found quicker as the population NN increases. However, when the population becomes so large that all the 1-mutation neighbourhood is thoroughly explored (while still staying in the monomorphic limit), the discovery time saturates because increasing the population doesn't increase the number of explored phenotypes (during a fixation period).

These results suggest that for intermediate $NL\mu$ there should be a smooth transition between these two regimes. To quantify the crossover we introduce a factor $\gamma$.

[See Figure 1.]

Simulations in model GP maps

The genotype is defined by:

  • Alphabet length: $K$
  • Genotype length: $L$

The number of available genotypes is thus $K^L$.

Random GP map:

Apart from specifying $K$ and $L$, we need to specify the set $\{F_p\}$, where $F_p$ is the fraction of genotypes mapping to phenotype $p$. The map is otherwise random.

In this setting, $\phi_{pq} = F_p$ is a good approximation if $N_q, N_p \gg 1$, where $N_i$ is the number of genotypes mapping to phenotype $i$. This also requires $N_P \ll N_G$ (i.e. {the number of phenotypes} is much less than {the number of genotypes}, i.e. the map is very many-to-one).

There is also a percolation threshold at a critical frequency ($F_p$), $\lambda(K) = 1 - K^{-1/(K-1)}$, so that only phenotypes with $F_p > \lambda(K)$ have "completely" connected neutral spaces (in the network where edges correspond to single-point mutations, i.e. genotypes separated by a Hamming distance of $1$). See the theory of percolation in Network science's Newman's book, Oxford notes, and problem sheets. See also Random Induced Subgraphs of Generalized n-Cubes.
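The threshold is a simple closed form, so it is quick to tabulate (my own sketch):

```python
def percolation_threshold(K):
    """Critical phenotype frequency lambda(K) = 1 - K**(-1/(K-1)): in the random
    GP map, only phenotypes with F_p above this are expected to have fully
    connected neutral spaces."""
    return 1 - K ** (-1 / (K - 1))

# For a binary alphabet the threshold is 1/2; for RNA's K = 4 it is lower:
print(percolation_threshold(2), percolation_threshold(4))
```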

Standing variation. Adaptation from standing genetic variation.

RNA secondary structure mapping

RNA genotypes of length $L$ made up of nucleotides G, C, U and A.

The phenotypes are the minimum free-energy secondary structures for the sequences, which can be efficiently calculated (see Fast Folding and Comparison of RNA Secondary Structures). The number of genotypes grows as $4^L$, while the number of phenotypes is thought to grow roughly as $N_P \sim 1.8^L$ (see Robustness and Evolvability in Living Systems - Andreas Wagner). Also:

From sequences to shapes and back: a case study in RNA secondary structures. - pdf.

Epistasis can lead to fragmented neutral spaces and contingency in evolution

The Ascent of the Abundant: How Mutational Networks Constrain Evolution

Discovery times are slower than in the random GP map. This reflects the internal structure of the RNA: similar genotypes typically have similar mutational neighbourhoods (see Exploring phenotype space through neutral evolution.), and so the population needs to neutrally explore longer in order to find novelty.

Phenotypic bias leads to a simple, systematic ordering in the discovery of novel phenotypes.

The arrival of the frequent

Comment: The fact that this discussion requires speaking about a change in the environment is what makes "the arrival of the frequent" a non-equilibrium effect, I think. Compare this with the survival of the flattest which is an equilibrium effect.

We need to have $s_2, s_1 \gtrsim 1/(2N)$ because the probability of fixation is (see here (page 201) or here, or here (page 326)):

$$P = \frac{1-e^{-qs}}{1-e^{-2Ns}}$$

So for $2Ns \gtrsim 1$, $\frac{q(2Ns)}{2N(1-e^{-2Ns})} > \frac{q}{2N}$. We need to have $s_2, s_1 \gtrsim 1/(2N)$ so that the probability of fixation of the two alternative phenotypes is considerably larger than that of the initial phenotype $q$, for which $s=0$. In here (page 321) an expression for the case of $N$ very large is derived without using the diffusion approximation.
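A quick numerical sketch of this fixation probability (my own code; the neutral branch is the $s \to 0$ limit of the formula, obtained by expanding the exponentials):

```python
import math

def fixation_probability(s, N, q=1):
    """Diffusion-approximation fixation probability
    P = (1 - exp(-q*s)) / (1 - exp(-2*N*s)).
    In the neutral limit s -> 0 this reduces to q/(2N)."""
    if s == 0:
        return q / (2 * N)
    return (1 - math.exp(-q * s)) / (1 - math.exp(-2 * N * s))

# Selection starts to matter once 2Ns is of order 1:
neutral = fixation_probability(0, 1000)        # 1/2000
selected = fixation_probability(0.001, 1000)   # 2Ns = 2, noticeably larger
```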

A more frequent phenotype ($p_1$, with $\phi_{p_1 q}$ much larger than that of competitor $p_2$) is favoured via two related effects:

  • It is discovered much earlier, and so it has a chance to fix before $p_2$.
  • Because the discovery time-scale is much smaller, if {we are in the large-population monomorphic limit, so that the single-mutation neighbourhood is explored many times before the population fixes to a new genotype}, then $p_1$ is also visited more often, and therefore its fixation probability is larger. Say that $p_1$ is visited $n$ times; then the probability that it fixes is $1-(1-p)^n \approx np$, where $p$ is the probability that it fixes when it is visited once ($1/N$ without selection bias), and $p$ is much smaller than $1$ for the approximation ($N$ large). >Isn't this the same as the result that "The rate of fixations is equal to the rate of (neutral) mutations of an individual." derived in Monomorphic limit (Wright-Fisher model)? Yes. This is observed in their microscopic models (On the significance of neutral spaces in adaptive evolution). This effect is ignored in origin-fixation models (see Bias in the introduction of variation as an orienting factor in evolution). Hm, how exactly is it ignored? I haven't read the paper yet... Well, from what Ard said, the way they ignore it is that they ignore the short-time correlations in the monomorphic limit, discussed in the paper.

Another effect that often positively correlates with the frequency of a phenotype is Mutational robustness (see Robustness and evolvability: a paradox resolved and Epistasis can lead to fragmented neutral spaces and contingency in evolution). Mutational robustness has been shown to offer selective advantage at high mutation rates, because phenotypes which are not robust will often mutate to deleterious mutants and probably go extinct, while phenotypes which are robust will survive. This effect is called the "survival of the flattest", as robust phenotypes correspond to "flat" regions in the fitness landscape (see the paper). This effect can also be understood in terms of free fitness (see Free fitness that always increases in evolution), in analogy to "free energy" in Statistical physics (see The application of statistical physics to evolutionary biology), as it incorporates an entropy-like term accounting for the size of the neutral space of the phenotype.

However, {the arrival of the frequent} is a non-equilibrium effect (unlike {the survival of the flattest}, which assumes equilibrium or pseudo-equilibrium). This is because it describes how discovery times and discovery frequency depend on the phenotype's frequency ($F_p$) after a change in the environment, when the system is out of equilibrium.

For the monomorphic limit (small mutation rate, in Figure 4.), the probability.....

Summary/Discussion

Genotype-phenotype (GP) maps are observed to be highly biased: Some phenotypes are realised by orders of magnitude more genotypes than most other phenotypes.

The large bias observed in GP maps translates into a similar order-of-magnitude variation in the median discovery times $T_p$ for a range of population genetic parameters. However, correlations in the GP map can cause the relation between $T_p$ and phenotype frequency $F_p$ to have large fluctuations (for example, $\phi_{pq}$ (which determines $T_p$) can be $0$ even if $F_p$ is quite large).

For the GP maps studied, the strong bias in the GP map leads to a systematic ordering of the median discovery times of alternative phenotypes, an effect that we postulate may hold for other GP maps as well.

The correlations in the RNA GP maps mean that close genotypes have similar neighbourhoods, so that one needs to explore further to reach truly new {genotype neighbourhoods}. This is why the fitting parameter $\gamma$ is smaller than the value expected in the mean-field approximation. This is also why, for very similar values of $\phi_{pq}$, there is a range of values of $T_p$ spanning about $1$ order of magnitude. This probably means that it takes up to $\sim 10$ generations to {reach truly novel genotypic neighbourhoods} in the {neutral exploration}. Still, the many orders of magnitude range observed in $\phi_{pq}$ dominates the variation in phenotype discovery times ($T_p$), providing an a posteriori justification for the mean-field approximation.

It is reasonable to expect all these features to arise in other GP maps found in natural (or artificial) systems, including biological systems.

Taken together, these arguments suggest that the vast majority of possible phenotypes may never be found, and thus never fix, even though they may globally be the most fit: Evolutionary search is deeply non-ergodic (I think that this is in the sense that we don't quite reach equilibrium on reasonable time scales, or that the observation time-scales needed for the system to appear ergodic are much larger than those used in experiment. However, this is also true in many other systems, like particles in a gas; that system, however, doesn't show the bias needed for the Arrival of the frequent effect).

When Hugo de Vries was advocating for the importance of mutations in evolution, he famously said ''Natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest'' [2]. Here we argue that the fittest may never arrive. Instead, evolutionary dynamics can be dominated by the ''arrival of the frequent''.


Older comments:

So I think what he was talking about is that we can construct a network of phenotypes which is a projection of the network of genotypes via the genotype-phenotype map. Links in the network of genotypes are possible mutations, and all genotypes have the same degree. However, not all nodes have the same degree in the network of phenotypes.

We can then apply results from network theory of the steady distribution for a random walker on a network.

So this sets a bias on the distribution on the phenotype network.

Over this bias there will be the fitness surface.

Art

guillefix 25th June 2016 at 5:03pm

That subtle aspect of the human mind committed to the creation of new structures in the World.

That is, by virtue of its extreme complexity, human brains are able to catalyze equally complex structures out of the molecular chaos that excites the neurons in random ways. These structures can resonate with the brain/minds of other people in ways that evoke emotion, and may thus be called beautiful by these people.

For this reason art may be considered as a language or a means for communicating emotions.

Art itself has been studied, but even more often, it has been practised, creating a vast amount of works of art throughout human history. The classification of these works, though of course, blurry, is based in part on the medium the art is expressed on, and which senses are primarily used to experience it.

Other interesting definitions discussed here: Is Programming Art? - MPJ's Musings - FunFunFunction #33

Portal:Contents/Culture and the arts

See also Technology & Engineering


Fund artists! https://www.patreon.com/


http://www.openculture.com/2015/03/download-422-free-art-books-from-the-metropolitan-museum-of-art.html

http://michaelnielsen.org/blog/the-artist-and-the-machine/

Artifact

guillefix 19th April 2016 at 10:56pm

Artificial and machine intelligence

guillefix 21st June 2016 at 4:16pm

Computational intelligence - Scholarpedia

Oxford course (with video) Deep learning.

Youtube playlist by mathematicalmonk

Hugo Larochelle YouTube videos

Read Neural Turing machines paper

See also: Evolutionary computing, Bio-inspired computing, Sloppy systems


Artificial intelligence (AI) has the overall goal of understanding and engineering intelligence, behaviour that involves understanding, and higher cognitive functions. It is a broad and very interdisciplinary field. It feeds to and from Machine learning, Logic, Cognitive science, Neuroscience, etc.

Oxford's society OxAI


Machine intelligence, is essentially a synonym of AI, but with the connotation of using machines and computers to create and understand intelligence. The biggest part of it, Machine learning, deals with the problem of extracting features from data (learning) so as to solve (mostly) predictive tasks.


Uses


Companies and projects


Miscellaneous notes from first Nando's first deep learning lecture


Challenges: One-shot learning, multi-task & transfer learning, scaling and energy efficiency, ability to generate data (e.g. vision as inverse graphics), architectures for AI.


See more at Machine learning


Mathematical modelling of neural networks

Why Deep Learning models perform so well?

Seems to be a result of:

  • Very large datasets
  • Increasing computing power
  • Flexibility of the models. Lots of parameters when there are lots of layers. Furthermore, multiple layers avoid the curse of dimensionality

Eric Drexler - A Cambrian explosion in Deep learning


A Gradient Descent Method for a Neural Fractal Memory

https://www.oreilly.com/ideas/the-current-state-of-machine-intelligence-2-0

Computer vision

Artificial chemistry

guillefix 18th June 2016 at 2:03am

Artificial chemistry

http://tuvalu.santafe.edu/~walter/AlChemy/alchemy.html

Arrival of the fittest. The modern evolutionary synthesis based on Population genetics has an existence problem: it assumes the existence of individuals, genes, alleles, etc. They propose a simple model abstracting from chemistry to explain the Self-organization of self-maintaining and self-replicating structures, necessary for the origin of life and Darwinian Evolution.

They point out related work in autopoiesis, concurrent computation, a "chemical abstract machine", autocatalytic reaction networks.

Review paper

Artificial Chemistries

Artificial intelligence

guillefix 12th July 2016 at 12:53am

Artificial intelligence innovation

guillefix 1st June 2016 at 7:18pm

Artificial neural network

guillefix 12th July 2016 at 1:11pm

Aka ANN, or simply neural network.

A particularly useful way of representing functions, for problems in Machine learning. It is a very good model for many problems, and learning algorithms produce very good results with them. In particular deep learning (which uses ANNs with many layers).

Hugo Larochelle class videos on [2.9]

Neuron has:

1) inputs

2) weight vectors, that multiplies the input vector or activation vector of hidden layers.

3) bias, that is added to result

4) activation function takes as argument the result of the above (called pre-activation or input activation)

5) The result (called activation) may be the input of other neurons in the next layer, in a multilayer feedforward neural network.

6) The activation of the last layer, is the output

Overall... we are multiplying by matrices and applying simple nonlinear functions
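The six steps above can be sketched in a few lines (a minimal pure-Python sketch of my own, not from the course; the weights and inputs are arbitrary illustrative numbers):

```python
import math

def forward(x, layers):
    """Feedforward pass through a multilayer network. Each layer is a tuple
    (W, b, f): weight matrix (list of rows), bias vector, activation function.
    pre-activation = W @ a + b; activation = f(pre-activation)."""
    a = x
    for W, b, f in layers:
        a = [f(sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i)
             for row, b_i in zip(W, b)]
    return a  # activation of the last layer is the output

sigmoid = lambda z: 1 / (1 + math.exp(-z))
layers = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1], math.tanh),  # hidden layer
    ([[1.0, -1.0]], [0.0], sigmoid),                                 # output layer
]
y = forward([1.0, -0.5, 0.2], layers)   # a single output activation in (0, 1)
```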

Universal approximator theorem

See paper mentioned in Hugo's vid, single hidden layer ANNs can approximate any continuous function with sufficiently many neurons in the hidden layer. There may not be a learning algorithm to find the right parameter set though.

Optimization

Learning by minimizing a cost function

Using SGD. An efficient algorithm to compute the gradients of the loss function w.r.t. the ANN's parameters is backpropagation.

Backpropagation. It effectively uses the chain rule to compute the gradient w.r.t. parameters at one layer from the values of the gradients w.r.t. parameters at the layer above (deeper).
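A minimal single-neuron sketch of the chain rule at work (my own code, using squared-error loss; a real implementation would propagate the delta term back through many layers):

```python
import math

def sgd_step(w, b, x, t, lr=0.1):
    """One SGD step on a single sigmoid neuron with loss L = (a - t)^2 / 2.
    Chain rule: dL/dw_j = (a - t) * a*(1 - a) * x_j, with a = sigmoid(w.x + b)."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b        # pre-activation
    a = 1 / (1 + math.exp(-z))                          # activation
    delta = (a - t) * a * (1 - a)                       # dL/dz via chain rule
    w = [wj - lr * delta * xj for wj, xj in zip(w, x)]  # dL/dw_j = delta * x_j
    b = b - lr * delta                                  # dL/db = delta
    return w, b

def loss(w, b, x, t):
    a = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))
    return 0.5 * (a - t) ** 2

w, b, x, t = [0.5, -0.3], 0.0, [1.0, 2.0], 1.0
w2, b2 = sgd_step(w, b, x, t)   # one gradient step should not increase the loss
```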

Efficient BackProp


Video

See this too

Why backprop is more efficient than naive approach

Derivatives wrt the input give you a way of knowing which part of the input is determining the classification, i.e. where is the cat in the image, for example

Types of neural networks


Mathematical modelling of neural networks


A Neural Network in 11 Lines of Python

More models, and generalizations

Backpropagation, temporal networks, etc..

Visualizing and Understanding Deep Neural Networks by Matt Zeiler


Physical implementations:

Chemical implementations of neural networks and Turing machines

http://knowmtech.com/


More

Layerless neural networks? See Chico Calmagro's work with Ard Louis.

On the complex backpropagation algorithm

Neural networks for control systems—A survey

Genetic deep neural networks using different activation functions for financial data mining

Structure Discovery of Deep Neural Network Based on Evolutionary Algorithms

Genetic algorithms for evolving deep neural networks

Busqueda de la estructura optima de redes neurales con Algoritmos Geneticos y Simulated Annealing. Verificacion con el benchmark PROBEN1

Implementation of Evolutionary Algorithms for Deep Architectures

See ideas here: Idea for neural network for chemical synethesis and manufacturing etc. Facebook post: https://www.facebook.com/guillermovalleperez/posts/10153853693416223?

Statistical mechanics of neural networks

Neural networks and physical systems with emergent collective computational abilities

Spin-glass models of neural networks

Learning and pattern recognition in spin glass models

Neural nets : classical results and current problems

Artistic movements & styles

guillefix 8th April 2016 at 5:51pm

Assembly (programming language)

guillefix 30th June 2016 at 1:12am

Assortative mixing

guillefix 16th March 2016 at 10:09pm

See Measures and metrics for networks

Homophily or assortative mixing is a bias in favour of connections between network nodes with some similar characteristics.

Assortative mixing by enumerative characteristics

Enumerative (a.k.a categorical) characteristics are those where the possible values don't have any particular metric for being close (i.e. a distance function). Eg.: gender, school

Measure given by modularity:

$$Q=\frac{1}{2m}\sum_{ij} \left( A_{ij}-\frac{k_i k_j}{2m} \right) \delta(c_i, c_j)$$

where $\delta(c_i, c_j)$ is the Kronecker delta, which is $1$ if the category $c_i$ of $i$ is the same as that of $j$, and $0$ otherwise. Another way to write it turns out to be:

$$Q=\sum_r (e_{rr}-a_r^2)$$

where $e_{rs}$ is the fraction of edges that join nodes of type $r$ to nodes of type $s$, and $a_r$ is the fraction of ends of edges attached to nodes of type $r$. If we generalize to weighted networks then $k$ would be the strength, i.e. the weighted degree; $e_{rs}$ would be the fraction of edge weight joining nodes in the two sets, and $a_r$ would be the fraction of half the edge weight assigned to nodes in set $r$.

This is just equal to the number of edges connecting vertices of alike type, minus the expected such number for a random network (with degrees distributions for each category fixed).

$B_{ij} = A_{ij}-\frac{k_i k_j}{2m}$ is called the modularity matrix.

The normalized modularity (normalized by its maximum value, attained when all edges fall between alike nodes) is called an assortativity coefficient.
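The $\sum_r (e_{rr}-a_r^2)$ form is straightforward to compute from an edge list (my own sketch; the example graph and categories are illustrative):

```python
def modularity(edges, category):
    """Modularity Q = sum_r (e_rr - a_r**2) for an undirected network given as
    an edge list, where e_rr is the fraction of edges joining two type-r nodes
    and a_r is the fraction of edge ends attached to type-r nodes."""
    m = len(edges)
    e_rr, a = {}, {}
    for u, v in edges:
        r, s = category[u], category[v]
        a[r] = a.get(r, 0) + 1 / (2 * m)    # each edge has two ends
        a[s] = a.get(s, 0) + 1 / (2 * m)
        if r == s:
            e_rr[r] = e_rr.get(r, 0) + 1 / m
    return sum(e_rr.get(r, 0) - a[r] ** 2 for r in a)

# Two triangles joined by a single bridge, each triangle its own category:
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
category = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(modularity(edges, category))  # 6/7 - 2*(1/2)**2 = 5/14 ≈ 0.357
```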

Assortative mixing by scalar characteristics

By scalar characteristics we mean those that have a metric giving a notion of closeness, so that two nodes can be approximately alike (age, etc.).

Measure by a Pearson coefficient (i.e. a normalized covariance) for the correlation of the value of the scalar $x_i$ at the two ends of the edge. The covariance turns out to be:

$$Q=\frac{1}{2m}\sum_{ij} \left( A_{ij}-\frac{k_i k_j}{2m} \right) x_i x_j$$

and one can divide by its max value to get an assortativity coefficient.

If there is positive assortativity, the network is sometimes said to be stratified. Other, nonlinear kinds of correlations may not be detected by the Pearson coefficient (for example, low and high $x$ being more often connected with intermediate $x$). Other, information-theoretic measures may then be used, or a scatter plot of $x_i$ vs $x_j$ for visual insight, as in the figure below:

Note that in this figure, the values, 9, 10, 11, 12 are bins, and the positions of points (which represent edges, or pairs of nodes) within each bin is just used to visually aid in identifying blocks with more density.

Assortative mixing by degree

Degree is special case of scalar because degrees may be close to one another (using usual distance function on integers), so use same formula.

If a network shows assortative mixing by degree, it often displays a core (with high density of nodes) and periphery (with low density) structure, see (a). If it shows disassortative mixing by degree, it often shows star-like features and is more uniform, see (b).

There appears to be another definition of a quantity called assortativity in this review.


A network partition with Q>0Q>0 exhibits "assortative mixing"
A network partition with Q<0Q<0 exhibits "disassortative mixing"

Can also rewrite the assortativity coefficient in this case as a Pearson coefficient for the distribution of the "excess degree" of nodes (i.e. follow an edge to a node and look at distribution of remaining stubs). See page 5 in notes.

Notes:

Given some network and two partitions (assignment of nodes to categories), we can calculate their modularities, and find which is "more modular"

Maximizing QQ is a good way of finding "communities" of densely-connected nodes with sparse connections between those sets.

Can define scalar measure of assortativity. See page 3 in notes.

assortative_dissasortative.png

guillefix 13th February 2016 at 5:36pm

Astronomy

guillefix 5th July 2016 at 3:22am

Astrophysics

guillefix 5th July 2016 at 3:15am

Asymptotic analysis

guillefix 26th June 2016 at 4:36pm

Asymptotic approximation

guillefix 11th June 2016 at 5:48pm

Handout from lecture

Convergence ...

Asymptoticness ... is often more useful in practice, because truncated series give good results, while convergent series often don't unless you take many terms

Asymptotic approximation (or asymptotic expansion)... An example is an asymptotic power series

See notes for definitions.

Order notation

Big O: f=O(g)f=O(g) as ϵ0\epsilon \rightarrow 0

(f could be asymptotic to const*g, or much smaller)

Small o: f=o(g)f=o(g)

f is strictly much less than g

Strict order: f=ord(g)f=\text{ord}(g)

f is strictly of order g, i.e. asymptotic to some constant times g.

Uniqueness of asymptotic series

If a function possesses an asymptotic approximation in terms of an asymptotic sequence, then that approximation is unique for that particular sequence.

Note that the uniqueness is for a given sequence. A single function may have many asymptotic approximations, each in terms of a different sequence.

Note also that the uniqueness is for a given function: two functions may share the same asymptotic approximation, because they differ by a quantity smaller than the last term included. Two functions sharing the same asymptotic power series, as above, can only differ by a quantity which is not analytic, because two analytic functions with the same power series are identical.

Asymptotic approximations can be naively added, subtracted, multiplied or divided, resulting in the correct asymptotic expression for the sum, difference, product or quotient, perhaps based on an enlarged asymptotic sequence.

One asymptotic series can be substituted into another, although care is needed with exponentials.

Asymptotic expansions can be integrated term by term with respect to ϵ\epsilon resulting in the correct asymptotic expansion of the integral. However, in general they may not be differentiated with safety, i.e., when differentiating there is always the worry that neglected higher-order terms suddenly become important.

Numerical use of divergent series

Optimal truncation: Truncating at the smallest term is known as optimal truncation.
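As an illustration (an example of my own, not from the notes): the Stieltjes integral $S(x)=\int_0^\infty e^{-t}/(1+xt)\,dt$ has the divergent asymptotic series $S(x) \sim \sum_n (-1)^n n!\, x^n$, yet optimal truncation gives an answer accurate to roughly the size of the smallest term:

```python
import math

def stieltjes_numeric(x, T=40.0, n=200000):
    """Trapezoidal estimate of S(x) = int_0^inf e^{-t}/(1 + x t) dt,
    truncated at t = T (the tail beyond T is exponentially small)."""
    h = T / n
    f = lambda t: math.exp(-t) / (1 + x * t)
    return h * (0.5 * (f(0.0) + f(T)) + sum(f(k * h) for k in range(1, n)))

def optimally_truncated(x):
    """Sum the divergent series S(x) ~ sum (-1)^n n! x^n,
    stopping at the smallest term (optimal truncation)."""
    s, term, n = 0.0, 1.0, 0            # term = n! x^n, starting at n = 0
    while term * (n + 1) * x < term:    # next term still smaller?
        s += (-1) ** n * term
        term *= (n + 1) * x
        n += 1
    return s + (-1) ** n * term         # include the smallest term, then stop
```

For x = 0.1 the smallest term is around n = 10 (size ~ 4e-4), and the truncated sum agrees with the numerical integral to about that accuracy, even though the full series diverges.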

Parametric expansions

So far we have been considering functions of a single variable as that variable tends to zero. Such problems often occur in ordinary and especially partial differential equations when considering far field behaviour for example, and these are known as coordinate expansions.

More common is for the solution of an equation to depend on more than one variable, f(x;ϵ)f(x; \epsilon) say. Often we have a differential equation in the independent variable xx which contains a small parameter ϵ\epsilon, hence the name parametric expansion. For functions of two variables the obvious generalisation is to allow the coefficients of the asymptotic expansion to be functions of the second variable:

f(x,ϵ)n=0an(x)δn(ϵ)f(x, \epsilon) \sim \sum_{n=0}^\infty a_n(x) \delta_n(\epsilon) as ϵ0\epsilon \rightarrow 0

Asymptotic approximation of integrals

guillefix 28th April 2016 at 2:40pm

Integration by parts (IBP)

See examples in notes, and problems.

One has to choose the right functions. Nice because it gives error term explicitly, and can often be bounded.

Trick of separating integral domain.

Failure of integration by parts

General rule: Integration by parts will not work if the contribution from one of the limits of integration is much larger than the size of the integral.

It can still fail in other cases, if for some reason the terms in the expansion can't be generated by the IBP.

Laplace-type integrals

I(x)=abf(t)exϕ(t)dtI(x) = \int_a^b f(t) e^{x\phi(t)} dt   as xx\rightarrow \infty

Laplace method

For ϕ(t)\phi(t) real. Contributions near global maxima of ϕ(t)\phi(t).
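A quick sketch (my example, not from the notes): the leading-order Laplace formula $I(x)\approx f(t_0)\,e^{x\phi(t_0)}\sqrt{2\pi/(x|\phi''(t_0)|)}$ applied to $\Gamma(x+1)=\int_0^\infty e^{-t}t^x\,dt$, after substituting $t = xs$ so that $\phi(s)=\ln s - s$ with its maximum at $s=1$, recovers Stirling's formula:

```python
import math

def laplace_gamma(x):
    """Laplace-method (leading-order) estimate of Gamma(x+1).
    With t = x s the integral becomes x^{x+1} * int_0^inf e^{x(ln s - s)} ds,
    phi(s) = ln s - s has its maximum at s = 1 with phi(1) = -1, phi''(1) = -1,
    giving Stirling's formula sqrt(2 pi x) (x/e)^x."""
    phi_max, phi_pp = -1.0, -1.0
    return x ** (x + 1) * math.exp(x * phi_max) * math.sqrt(2 * math.pi / (x * abs(phi_pp)))
```

Already at x = 20 the relative error is below half a percent, consistent with the O(1/x) size of the neglected correction.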

Method of stationary phase

For ϕ(t)\phi(t) imaginary. Contributions from regions of stationary phase ψ(t)\psi(t) (where ϕ(t)=iψ(t)\phi(t)=i\psi(t)).

Method of steepest descents

Most general and powerful. For ϕ(t)\phi(t) generally complex, and the integral being along a complex contour in general too.

Splitting range of integration

Splitting the range of integration and using different approximations in each range.

See examples


Bounding integrals

Trick I use similar to IBP

Athanasius Kircher

guillefix 12th June 2016 at 4:25pm

https://en.wikipedia.org/wiki/Athanasius_Kircher

See The Horn of Alexander the Great

https://web.stanford.edu/group/kircher/cgi-bin/site/?page_id=517

Speaking tubes connected to statues

Hydraulic organ

http://machinamenta.blogspot.co.uk/2013/07/athanasius-kircher.html

One of the sources I used about Kircher is now available online. Beyond his ideas about organizing knowledge and automating art, his books are just a kick to look through. They're almost like illustrations for encyclopedia articles, but then you see a dragon, or a ladder to the center of the earth, or a mountain in the shape of a ma

(Translated from the Latin:) We divided the height of the Tower reaching up to the Moon into 5 parts, each of which contains 50 semidiameters of the terrestrial globe, plus 2 semidiameters for the distance of the nearest point of the Moon from the centre of the Earth, making 52 semidiameters of the geocosm; whence it is clearly concluded that the terrestrial globe would have been moved out of its centre by the weight of the Tower, by as great a space as the gap between O and N. You will likewise see that the weight of the Tower, balanced against the globe of earth M.L., would have far exceeded the weight of the globe of the Earth.

Magic lantern

System of subterranean fires

Atmosphere

guillefix 10th July 2016 at 4:21am

Atmospherical physics

guillefix 10th July 2016 at 4:22am

What keeps clouds together? SX question

atom_trapped_in_liquid.png

guillefix 7th February 2016 at 6:33pm

Atomic physics

guillefix 22nd June 2016 at 5:05am

Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus.

See Oxford course

MIT 8.421 Atomic and Optical Physics I, Spring 2014

See Atomic structure, Quantum mechanics

Atomic structure

guillefix 22nd June 2016 at 5:06am

Structure of the periodic table

The periodic table is mostly determined by the electronic structure of atoms (see Atomic physics). See also Chemistry

There are three rules of thumb, which were discovered phenomenologically (I think), but are justifiable from quantum mechanics:

Aufbau principle: Shells should be filled starting with the lowest available energy state. An entire shell is filled before another shell is started.

Madelung’s Rule: The energy ordering is from lowest value of n+ln + l to the largest; and when two shells have the same value of n+ln + l, fill the one with the smaller nn first.

(Madelung's rule)

Hund's rules

Teaching Atomic Structure: Madelung’s and Hund’s Rules in One Chart
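Madelung's rule amounts to sorting subshells $(n, l)$ by the key $(n+l, n)$; a few lines of code reproduce the familiar filling order:

```python
# Subshell filling order from Madelung's rule: sort (n, l) pairs by (n + l, n).
L_LETTER = "spdfg"  # spectroscopic letters for l = 0, 1, 2, 3, 4

subshells = [(n, l) for n in range(1, 8) for l in range(0, n)]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
labels = [f"{n}{L_LETTER[l]}" for n, l in order if l < len(L_LETTER)]
print(" ".join(labels[:12]))  # 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s
```

Note how the sort correctly places 4s before 3d (both have n + l = 5 for 3d vs n + l = 4 for 4s) and 3d before 4p (same n + l = 5, smaller n first).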

Atomically precise manufacturing

guillefix 20th May 2016 at 3:59am

Standard solution-phase reactions are based on the reactivities of different sites in molecules, and don't offer control of relative positioning, other than by statistical mechanics.

Stereotactic chemical reactions use molecular/supramolecular components to guide the relative positioning of reactive components.

It has been demonstrated using tip-based methods to juxtapose reactive molecules, or to remove hydrogen from hydrogen-passivated silicon (111) surfaces.

Ribosomes are natural examples.

Method by Turberfield et al. uses molecular motors to guide reactive monomers to make polymers with controlled sequence. Very slow for bulk production, but maybe good for research.


Nano 3D printer scheme for APM.

Talk at Martin School on Jan 2016

Other talk by John Randall

Talk by Merkle. Mechanosynthesis, etc.


Structural DNA nanotechnology for APM:

Mechanical design of DNA nanostructures "As the applications of DNA nanotechnology expand, a consideration of their mechanical behavior is becoming essential to understand how these structures will respond to physical interactions. "

Direct Design of an Energy Landscape with Bistable DNA Origami Mechanisms "Recently we have demonstrated the possibility of implementing macroscopic engineering design approaches to construct DNA origami mechanisms (DOM) with programmable motion and tunable flexibility. "

Artificial molecular machines. Large collection of molecular machines, from the fruitful field of supramolecular chemistry, mainly. See book "molecular machines" by Ross Kelly.

Artificial molecular machines (2000) Artificial molecular-level machines.

Light powered molecular machines.

Computational Design of a Family of Light-Driven Rotary Molecular Motors with Improved Quantum Efficiency.

http://nextbigfuture.com/2011/03/philip-moriarty-discusses.html

http://www.softmachines.org/wordpress/?p=205

http://www.nottingham.ac.uk/~ppzstm/research.php

http://www.molecularassembler.com/Nanofactory/

http://cofes.com/ADMIN-STUFF/Video/Video-Player/VideoId/489/Mark-Sims-Industry-Update-On-Design-Tools-For-The-Nano-Scale.aspx


More molecular machines

More animated simulations from NanoEngineer here

Source

Atomistic Design and Simulations of Nanoscale Machines and Assembly

http://www.wag.caltech.edu/gallery/gallery_nanotec.html

https://www.cgl.ucsf.edu/chimera/data/smart-team-jan2009/smart.html

http://www.imm.org/research/parts/

Chimera molecular modelling software system. Nice

See Computational chemistry

https://www.cgl.ucsf.edu/chimera/data/smart-team-jan2009/smart.html


Diamond mechanosynthesis: http://www.molecularassembler.com/


Some examples from Nature


Books:

"Nanosystems" by Eric Drexler (1992)

"molecular machines" by Ross Kelly (2005)

"Nanoelectronics and Nanosystems" Karl Goser et al. (2004)

ATP

guillefix 8th July 2016 at 6:04pm

Adenosine triphosphate

Molecule that stores chemical energy for the Cell. Produced by Cellular respiration

Adenine + Ribose + 3 phosphate groups

ATP hydrolyses to ADP and phosphate, by unbonding one of the phosphate groups, and releasing energy.


https://www.wikiwand.com/en/Adenosine_triphosphate

Audiovisual engineering

guillefix 25th June 2016 at 3:37am

AugMath

guillefix 19th July 2016 at 4:56pm

METEOR IS GOING TO STOP SUPPORTING FREE METEOR.COM HOSTING, CHANGE TO HEROKU OR SOMETHING

Get examples from here: https://brilliant.org/

https://keep.google.com/u/0/#search/text=augmath

Automatic simplification is kept to a minimum, to allow notation tricks used in practice for manipulation. In the future, a setting for which level of auto-simplification is desired should be added.

It's hard to practice defensive programming when you are trying to give users so much freedom.

It's interesting how, having several different representations of the math (the latex, the math tree, and the html), I think we can be more efficient by seizing the most appropriate one for each task (like checking some property)

Parsing

AugMath at the moment does parsing, but without much validation.

Animating maths

Inputting maths

http://mathdox.org/formulaeditor/ Check this!!

MathQuill Slack channel: https://mathquill.slack.com/messages/mathquill/ See also their website

Displaying maths

KaTeX, MathJax http://docs.mathjax.org/en/latest/advanced/extension-writing.html https://github.com/mathjax/MathJax-third-party-extensions/tree/master/physics

Interacting with maths

Maple Clickable maths

This is just what I meant when I said AugMath aims for Virtual Reality as a platform. And it is awesome: https://vimeo.com/150928998


Computer algebra system

Check Ket algebra editor

Geometry software

http://www.cinderella.de/tiki-index.php

GeoGebra

Mathematical document

https://trac.omdoc.org/OMDoc

Mathematical markup language

Handwritten math recognition: https://www.facebook.com/groups/hackathonhackers/permalink/1265209943534488/

http://cat.prhlt.upv.es/mer/

Other mathematical software

http://www.matracas.org/sentido/


See stuff in GKeep and KTreeTop in Dropbox

http://cognitivemedium.com/emm/emm.html

http://immersivemath.com/ila/ch02_vectors/ch02.html

Mathematical markup language

http://worrydream.com/KillMath/

Automata theory

guillefix 15th July 2016 at 7:07pm

Related to Theory of computation.

Automata theory is the study of abstract machines or automata, as well as the computational problems that can be solved using them

An automaton (plural: automata or automatons) is a self-operating machine, or a machine or control mechanism designed to follow automatically a predetermined sequence of operations, or respond to predetermined instructions. Automata include finite-state machines, etc.
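A minimal sketch of such a machine (the states and transition table below are made up for illustration): a DFA over {0, 1} that accepts exactly the strings with an even number of 1s:

```python
def run_dfa(string, transitions, start, accepting):
    """Run a deterministic finite automaton: follow the transition table
    symbol by symbol, then check whether the final state is accepting."""
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states tracking the parity of the number of 1s seen so far.
even_ones = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}

print(run_dfa("1011", even_ones, "even", {"even"}))  # three 1s -> False
```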

Finite automaton

Input affects dynamics

Finite-state machine

Input affects initial state

Discrete dynamical system (e.g., networks of automata)

Output

Finite-state transducer, a FSM with output from transitions.

Symbolic dynamics, Discrete dynamical system with output from states visited.

Infinite automaton

Finite-state machine + infinite data structure

Networks of automata

Cellular automata

Graph dynamical system

For instance a Boolean network


See Formal language

Computer - Theory of Automata, Formal Languages and Computation


Krohn–Rhodes theory


a new approach to formal language theory by kolmogorov complexity

http://www.eecs.wsu.edu/~ananth/CptS317/Lectures/IntroToAutomataTheory.pdf

Automata, Computability, and Complexity: Or, Great Ideas in Theoretical Computer Science, Spring 2010

Grail: finite automata and regular expressions

FAdo Symbolic Manipulation of Code Properties FAdo Documentation

http://fado.dcc.fc.up.pt/software/

pyfst: OpenFst in Python

Build your own finite transducer: http://examples.mikemccandless.com/fst.py?terms=pepe%2F33%0D%0Amoth%2F1%0D%0Apop%2F2%0D%0Astar%2F3%0D%0Astop%2F4%0D%0Atop%2F5%0D%0A&cmd=Build+it%21

https://www.google.es/search?safe=off&q=Automata+Studies&stick=H4sIAAAAAAAAAONgFuLSz9U3SDYsMcwrVkKwc7R4nPLzs4MzU1LLEyuLAdpMsUQoAAAA&sa=X&ved=0ahUKEwiy8aPe1ZvMAhWMSRoKHVBOA0wQxA0IowEwEQ&biw=1605&bih=965

FSM in Sage

https://en.wikipedia.org/wiki/Alternating_finite_automaton

http://www.cmi.ac.in/~kumar/words/

Automata-based descriptional complexity

guillefix 15th July 2016 at 9:34pm

A computable class of Descriptional complexity measures, based on automata

Automatic complexity

Automatic complexity of strings: the smallest number of states of a DFA (deterministic finite automaton) that accepts xx and does not accept any other string of length |x|. Note that a DFA recognizing the singleton language {x} always needs |x|+1 states, which is the reason the definition considers only strings of length |x|.

Automaticity

AUTOMATIC SEQUENCES

Automaticity I: Properties of a Measure of Descriptional Complexity

Automaticity II

Automaticity is a descriptional complexity measure analogous to Automatic complexity, but for languages.

Finite state dimension

The finite-state dimension is defined in terms of computations of finite transducers on infinite sequences,

Entropy rates and finite-state dimension

Finite-state dimension and real arithmetic ☆

Finite state complexity

Newest measure in this area.

Paper: http://www.sciencedirect.com/science/article/pii/S0304397511005408

Finite state complexity is defined as the smallest length of input that will produce the result under a finite transducer (a finite state machine with output, basically, which I think can describe the GP maps). Then we can apply Ard's argument about how many ways there are of fitting this shortest string into the fixed-length input of interest (say the genotype). This could be the beginning of the formal theory we need! We probably would also want to develop a concept of algorithmic probability (like Solomonoff's) for finite state machines.

Finite-State Complexity and Randomness

Finite-State Complexity and the Size of Transducers

Finite state transducer Finite model theory


Others

NFA based complexity

Approximating the smallest grammar: Kolmogorov complexity in natural models. However, the model allows the advice strings to be over an arbitrary alphabet with no penalty in terms of complexity and, as observed in [8], consequently the NFAs used for compression can always be assumed to consist of only one state... (so not a very good measure).

State complexity

State complexity of regular languages

Automated economy

guillefix 9th April 2016 at 1:09pm

Backend web development

guillefix 30th June 2016 at 1:07am

Frameworks

node.js

Meteor (JS)

Hosting

Nice easy tutorial to deploy meteor apps on DigitalOcean

Domains

Setting up DNS records for github pages. Remember: DNS nameservers are servers that contain DNS records mapping domain names to IP addresses (and more complicated things too). Domain registrars, which let you manage domains you own, may offer their own DNS nameservers, or may let you use a third-party nameserver (like Namecheap's FreeDNS) to direct domain names to IPs.

Logstalgia: visualization of http requests to a server


http://kubernetes.io/

Backwards Fokker-Planck equation

guillefix 27th April 2016 at 1:59am

Basic results in probability theory

guillefix 7th July 2016 at 6:17pm

Expected number of times I get a certain outcome for a set of random variables with the same sample space, but potentially different and dependent probability distributions

Imagine I have two random variables (XX and YY) each of which can have value AA or BB. Imagine I want to know the expected number of As I get. This will be:

E[number of As]=1p(X=A and Y=B)+2p(X=A and Y=A)E[\text{number of }A\text{s}]=1\cdot p(X=A\text{ and }Y=B)+2\cdot p(X=A\text{ and }Y=A) +1p(X=B and Y=A)+1\cdot p(X=B\text{ and }Y=A)

=(p(X=A and Y=B)+p(X=A and Y=A))=(p(X=A\text{ and }Y=B)+p(X=A\text{ and }Y=A)) +(p(X=B and Y=A)+p(X=A and Y=A))+(p(X=B\text{ and }Y=A)+p(X=A\text{ and }Y=A))

=p(X=A)+p(Y=A)=p(X=A)+p(Y=A)

And this result works whether XX and YY are independent random variables or not. The only thing we require is that getting X=AX=A and X=BX=B are mutually exclusive (and similarly for YY).
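A quick exhaustive check of this identity, on a made-up joint distribution in which XX and YY are strongly dependent:

```python
# E[#A's] = p(X=A) + p(Y=A), even for dependent X and Y.
# The joint probabilities below are invented; X and Y are correlated.
joint = {("A", "A"): 0.4, ("A", "B"): 0.1,
         ("B", "A"): 0.1, ("B", "B"): 0.4}

expected_As = sum(p * (x, y).count("A") for (x, y), p in joint.items())
pX_A = sum(p for (x, _), p in joint.items() if x == "A")
pY_A = sum(p for (_, y), p in joint.items() if y == "A")
print(expected_As, pX_A + pY_A)  # both equal 1.0
```

This is just linearity of expectation: it never requires independence, only that the outcomes of each individual variable be mutually exclusive.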

Inclusion-exclusion principle

https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle

Basin of attraction

guillefix 21st July 2016 at 3:23pm

Bayesian inference

guillefix 9th July 2016 at 3:12am

Bayesian statistics

guillefix 22nd May 2016 at 3:49pm

Beetle (insect)

guillefix 31st May 2016 at 12:19am

Behavioural sciences

guillefix 5th July 2016 at 3:56am

https://en.wikipedia.org/wiki/Behavioural_sciences

The study of the behaviour of animals and humans, with a focus on individual behaviour. For the study of collective behaviour see Social sciences.

Belief system

guillefix 8th July 2016 at 3:07am

Portal:Contents/Religion and belief systems

A belief system can refer to a Religion or a world view, i.e. a framework of ideas and beliefs through which an individual interprets the world and interacts in it.

Wikipedia:Portal/Directory/Philosophy, religion, and spirituality

Betweenness centrality

guillefix 15th February 2016 at 11:55pm

See Measures and metrics for networks

Measures the extent to which a node (or edge, or other substructure) lies on paths between other vertices. These paths can be defined in many ways, but often they are taken to be geodesic paths.

This is a measure of importance because, if we imagine nodes in the network sending messages between them, we could be interested in how often these messages pass through certain nodes or edges under certain assumptions (like that they follow geodesic paths). Vertices with high betweenness but ranking low on other centrality measures can be, for example, vertices that connect two barely connected "components". Vertices like this are called brokers in the sociological literature.

If we use the geodesic node betweenness, the definition is:

B_{\text{no}}(i)=\sum_{j,n \in G}\frac{\tilde{\psi}_{j,n}(i)}{\tilde{\psi}_{j,n}}

where \tilde{\psi}_{j,n}(i) is the number of geodesic paths between j and n that traverse i, and \tilde{\psi}_{j,n} is the total number of geodesic paths between j and n.

For directed, same but take direction of paths into account...

Can also define geodesic edge betweenness in similar fashion:

B_{\text{e}}(i,l)=\sum_{j,n \in G}\frac{\tilde{\psi}_{j,n}(i,l)}{\tilde{\psi}_{j,n}}

with obvious generalization of quantities. This is useful for example in road traffic analysis where we are interested in roads not in junctions.
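A brute-force sketch of geodesic node betweenness on a small graph, enumerating all shortest paths by breadth-first search (fine for toy networks, nothing like the efficient algorithms used in practice; note also that conventions differ on whether to include the endpoints as pairs — here they are excluded):

```python
from collections import deque
from itertools import combinations

def shortest_paths(adj, s, t):
    """Enumerate all geodesic (shortest) paths from s to t by BFS."""
    paths, best = [], None
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                      # all geodesics found already
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nb in adj[node]:
            if nb not in path:         # keep paths simple
                queue.append(path + [nb])
    return paths

def node_betweenness(adj, i):
    """Sum over pairs (j, n) of the fraction of geodesics through i."""
    total = 0.0
    for j, n in combinations(adj, 2):
        if i in (j, n):
            continue
        paths = shortest_paths(adj, j, n)
        if paths:
            total += sum(i in p for p in paths) / len(paths)
    return total

path_graph = {0: [1], 1: [0, 2], 2: [1]}
print(node_betweenness(path_graph, 1))  # the middle node scores 1.0
```

On the 4-cycle, each node lies on exactly half the geodesics of one opposite pair, so each scores 0.5.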

Some problems with robustness:

Another extension is flow betweenness, which is defined as the amount of flow through vertex i when the maximum flow is transmitted from s to t, summed over pairs s and t in the network. To see more about flow see Independent paths, connectivity, and cut sets (Graph theory). The problem with this definition is that it sometimes doesn't give a unique answer, because the same maximum flow can be achieved using different choices of independent paths. The usual definition is then to define the flow betweenness to be the maximum value that this number can take.

This still has some disadvantages because it doesn't take into account all paths: it assumes paths are somehow optimal (although in different ways).

A variant that does take all paths into account is the random-walk betweenness, defined as the expected number of times a vertex is crossed by an absorbing random walk between nodes s and t, summed over these pairs.

Article by Borgatti[51] draws together many of the possibilities into a general framework for betweenness measures.

betweeness_robustness_problems.jpg

Bias in GP maps

guillefix 21st July 2016 at 3:13pm

Biased random walk

guillefix 27th April 2016 at 1:19am

Brownian motion

Biased random walk, probability distribution is Binomial

Biased_potential.png

guillefix 21st January 2016 at 5:30pm

bifurcation_types.png

guillefix 15th March 2016 at 2:03pm

Bio-inspired computing

guillefix 24th June 2016 at 3:03am

https://en.wikipedia.org/wiki/Bio-inspired_computing

genetic algorithms (Evolutionary computing) ↔ evolution

biodegradability prediction ↔ biodegradation

cellular automata ↔ life

emergent systems ↔ ants, termites, bees, wasps

neural networks ↔ the brain

artificial life ↔ life

artificial immune systems ↔ immune system

rendering (computer graphics) ↔ patterning and rendering of animal skins, bird feathers, mollusk shells and bacterial colonies

Lindenmayer systems ↔ plant structures

communication networks and protocols ↔ epidemiology and the spread of disease

membrane computers ↔ intra-membrane molecular processes in the living cell

excitable media ↔ forest fires, "the wave", heart conditions, axons, etc.

sensor networks ↔ sensory organs


Finite populations induce metastability in evolutionary search ☆

The evolution of emergent computation

Membrane computing

DNA computing

Biochemistry

guillefix 8th July 2016 at 5:56pm

Biodiversity & evolution

guillefix 8th July 2016 at 3:51am

Biography

guillefix 28th June 2016 at 4:30pm

Biological matter

guillefix 3rd June 2016 at 12:12am

Biology

guillefix 8th July 2016 at 7:04pm

The study of life, that includes the most Complex systems known.

Description levels in biology

Systems biology. These levels form a nice hierarchy, but of course interact with and influence each other in crucial ways.

  1. Biosphere
  2. Ecosystem
  3. Organism
  4. Organ
  5. Tissue
  6. Cell
  7. Organelles
  8. Molecules

Levels 1,2: Ecology

Levels 2,3: Biodiversity & evolution

Evolution

Tree of life

  • Archea
  • Bacteria
  • Eukaryotes

add links to children nodes here too. and organize more, etc.

National Center for Biotechnology Information books

Levels 4,5: Organism biology

Developmental biology

Anatomy

Physiology

Levels 6,7: Cell biology

Levels 8: Molecular biology

and Biochemistry.


General methods

Quantitative biology

Using statistics, etc.

http://quant.bio/

Mathematical biology


Biology crash course

https://www.khanacademy.org/science/biology

http://ocw.mit.edu/courses/biology/7-01sc-fundamentals-of-biology-fall-2011/biochemistry/types-of-organisms-cell-composition/


Haloquadratum

http://bionumbers.hms.harvard.edu/

Cause and effect in biology - Ernst Mayr

Biomedical sciences

guillefix 8th April 2016 at 8:27pm

Biomolecule

guillefix 9th July 2016 at 12:14am

Biophysics

guillefix 5th July 2016 at 3:16am

BIOS

guillefix 31st January 2016 at 10:48pm

Upgrading bios

See here and here to upgrade bios of a Dell, like mine. Find the BIOS upgrade file here. Last updated on January 2016

Biotechnology innovation

guillefix 9th April 2016 at 5:32pm

Bipartite Networks

guillefix 31st January 2016 at 11:34pm

Bipartite Networks have two kinds of nodes, and only connections between unlike nodes.

The equivalent of the adjacency matrix is the incidence matrix, BB

It can be converted into a unipartite network by a one-mode projection where two vertices are connected if they both have a connection to the same vertex of the other group (we could improve this by adding a weight: the number of those vertices (groups) they have in common).

This projection generally results in an union of cliques, i.e. completely connected components.

The adjacency matrix of the projection is (after we remove the diagonal components) P=BTBP=B^TB .
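A minimal numerical sketch of the one-mode projection (the incidence matrix below is made up; rows are the "group" nodes, columns the ordinary nodes, so off-diagonal entries of the projection count shared groups, i.e. the weighted version mentioned above):

```python
import numpy as np

# Hypothetical bipartite network: 2 groups, 4 ordinary nodes.
B = np.array([[1, 1, 0, 0],    # group 0 contains nodes 0, 1
              [0, 1, 1, 1]])   # group 1 contains nodes 1, 2, 3

P = B.T @ B                    # weighted one-mode projection
np.fill_diagonal(P, 0)         # remove the diagonal (self) components
print(P)
```

Node 1, which belongs to both groups, ends up connected to every other node, while nodes 0 and 2 (no shared group) stay unconnected: the projection is a union of cliques, one per group.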

One can also have directed bipartite networks (as in Metabolic Networks), and weighted bipartite networks.

Hypergraphs can be represented as bipartite Networks. This is done by mapping the different relations in the hypergraph to a second type of node to which the original nodes can belong by being connected by an edge.

Block decomposition method

guillefix 15th July 2016 at 8:42pm

The block decomposition method (BDM) is an extension of the Coding theorem method to measure the complexity of NN-dimensional arrays. As a Network can be expressed via its Adjacency matrix, which is a 2D array, it can be used to measure Network complexity as well.

Original paper

The measure (which we also call BDM) of complexity of array AA is defined as:

K(A)=(r,u)Adlog2(n)+Km(r)K(A) = \sum\limits_{(r,u) \in \mathcal{A}_{d}} \log_2(n) + K_m (r)

where Ad\mathcal{A}_d is the set with elements (r,u)(r,u) obtained when decomposing the array into non-overlapping subarrays of side length dd. rr is one unique square, and nn is its multiplicity (number of times it appears). KmK_m refers to the estimate of Kolmogorov complexity used in the Coding theorem method. However, for NN-dimensional arrays, one uses NN-dimensional Turing machines, or Turmites. Note that log2(n)\log_2(n) is the number of bits needed to specify the number nn.

In the original paper, a set of 2-dimensional Turing machines was executed to produce all square arrays of size d = 4. This is why the BDM is needed in order to decompose objects of larger size into objects for which its Kolmogorov complexity has been estimated.
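A sketch of the decomposition step, with a made-up lookup table standing in for the CTM complexity values $K_m$ (the real method uses values estimated from the output frequencies of small 2-dimensional Turing machines):

```python
import numpy as np
from collections import Counter

def bdm(arr, d, km):
    """Block decomposition: split `arr` into non-overlapping d x d blocks
    and sum K_m(r) + log2(n) over the distinct blocks r, where n is the
    multiplicity of r. `km` maps a flattened block (as a tuple) to its
    CTM complexity estimate; here any lookup table stands in for it."""
    h, w = arr.shape
    blocks = [tuple(arr[i:i + d, j:j + d].flatten())
              for i in range(0, h - d + 1, d)
              for j in range(0, w - d + 1, d)]
    return sum(km[r] + np.log2(n) for r, n in Counter(blocks).items())
```

For a 4x4 all-zeros array split into 2x2 blocks, there is a single distinct block appearing n = 4 times, so the result is K_m(block) + log2(4) = K_m(block) + 2.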

The order of the graph nodes in the adjacency matrix is relevant for the complexity retrieved by the BDM. This is especially important in highly symmetrical graphs.

In estimating complexity, it is reasonable to consider that the complexity of a graph corresponds to the lowest KmK_m value of all permutations of the adjacency matrix, as the shortest program generating the simplest adjacency matrix is the shortest program generating the graph.

Normalized BDM

The chief advantage of a normalised measure is that it enables a comparison among objects of different sizes without allowing the size to dominate the measure.

MaxBDM is calculated approximately, as described in the paper.

Implementation

An online implementation and code can be found here

Blockchain

guillefix 23rd June 2016 at 12:43am

Boolean algebra

guillefix 14th July 2016 at 2:20am

A Boolean algebra is an Algebraic structure that models the relations between elements which can be either true or false. It is important in Mathematical logic and in Computer science.

It has the structure of an orthocomplemented, distributed Lattice (algebraic structure).

Boolean network

guillefix 24th June 2016 at 1:34am

Bootstrap percolation

guillefix 15th June 2016 at 4:56pm

http://research.microsoft.com/en-us/um/people/holroyd/boot/

An "infection" process in which nodes become infected if sufficiently many of their neighbors are infected. Related to the Centola-Macy threshold model for social contagions.

Bootstrap percolation on spatial networks (see Spatial networks).

Bootstrap Percolation - MathWorld

Bootstrap percolation on the random graph

Borel sigma-algebra

guillefix 15th July 2016 at 2:42am

A Sigma-algebra, B\mathcal{B}, on a set, Ω\Omega, defined as:

B=σ(τ)\mathcal{B} = \sigma(\tau)

i.e. the sigma-algebra generated by τ\tau, which is the set {all the open sets of Ω\Omega}, i.e. the topology on Ω\Omega. It is the smallest sigma-algebra that contains τ\tau. See here

A Borel measure is just a Measure on a Borel σ\sigma-algebra. Specifying such a measure is simplified by the Caratheodory extension theorem, which says that a σ\sigma-finite measure defined on an algebra of sets extends uniquely to a measure on the σ\sigma-algebra it generates.

Botany

guillefix 2nd July 2016 at 5:32pm

See Tree of life

Erodium cicutarium, Aguja de pastor (shepherd's needle).

Their achenes curl upon drying (and also when I squeezed one with my fingers, probably because of humidity). Seed launch is accomplished using a spring mechanism powered by shape changes as the fruits dry. The spiral shape of the awn can unwind during daily changes in humidity, leading to self-burial of the seeds once they are on the ground. The two tasks (springy launch and self-burial) are accomplished with the same tissue (the awn), which is hygroscopically active and warps upon wetting, and also gives rise to the draggy hairs on the awn.

Pepinillos del diablo ("devil's cucumbers")

Ecballium

The Science of Grapevines: Anatomy and Physiology

Boundary effects on the motion of active colloids

guillefix 17th June 2016 at 5:41pm

Boundaries can steer active Janus spheres. Looks at Catalytic conductor-insulator Janus swimmers. Note that the method of mirror images used in the paper for estimating effects of some rotational diffusion quenching mechanisms is not the same as used for electrostatic charges near a conductor. It is in fact an instance of the method of mirror images, as applied to Diffusion equations, where the image is used to satisfy the no-slip boundary condition in the current JJ of ions. As the current satisfies J=σEJ=\sigma E from Ohm's law, the effect on currents should have an accompanying effect on the Electric field.

Brain

guillefix 8th July 2016 at 2:22am

Brownian motion

guillefix 10th May 2016 at 1:21am

Brownian motion

Brownian Motion: Langevin Equation

Discrete space: random walk

Random walk on 1D lattice

Biased random walk, probability distribution is Binomial

Limits in time variable

A discrete space-time random walk has a standard deviation in position that is proportional to square root of number of steps:

σxa=n=tτ\frac{\sigma_x}{a}=\sqrt{n}=\sqrt{\frac{t}{\tau}}

σx=aτt\sigma_x=\frac{a}{\sqrt{\tau}}\sqrt{t}

Clearly if we want σx\sigma_x to stay finite for a finite tt, we want aτ\frac{a}{\sqrt{\tau}} to stay finite, and we get σxt\sigma_x\propto\sqrt{t} in continuous limit. We also get non-differentiable paths as aτ\frac{a}{\tau}\rightarrow \infty.
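A quick Monte Carlo check of the σxn\sigma_x \propto \sqrt{n} scaling (walker count and seed are arbitrary choices of mine):

```python
import random

# 10000 symmetric random walkers, each taking 100 unit steps;
# the standard deviation of the final position should be sqrt(100) = 10.
random.seed(0)
n_steps, n_walkers = 100, 10000
finals = [sum(random.choice((-1, 1)) for _ in range(n_steps))
          for _ in range(n_walkers)]
mean = sum(finals) / n_walkers
var = sum((f - mean) ** 2 for f in finals) / n_walkers
print(var ** 0.5)  # close to 10
```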

Random walk on 2D square lattice. Combinatorics get harder

Solving random walk diffusion on a finite domain with different boundary conditions

Polya's recurrence theorem for random walks

See also the probability distribution for a random walk (same as for a polymer), for example here or in Soft Matter Physics notes. The probability density at the origin goes like 1/Nd/21/N^{d/2} (normalization of the Gaussian). One can then sum over all possible lengths of time (i.e. over NN) and get the expected number of times one returns to (a neighbourhood of) the origin (see Note 1 in Probability theory for why). For d=1,2d=1,2 this is \infty, while it is finite for d3d \geq 3. This can be interpreted for a polymer as it being "dense" or "sparse": summing over NN, we are asking how many monomers of our very long polymer are close to a given point (say the origin).

One can also find probability of ever coming back, and this can be related to the expected number of times to come back. This can also be derived heuristically for the asymptotic limit of large times.
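This recurrence/transience dichotomy is easy to see numerically. A Monte Carlo sketch (walker and step counts are arbitrary choices): count visits to the origin for simple walks on the d-dimensional lattice; the count keeps growing for d=1 but saturates for d=3.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_returns(d, n_steps=2000, n_walkers=500):
    """Average number of returns to the origin for a simple walk on the d-dim lattice."""
    # each step: pick a random axis and a random direction along it
    axes = rng.integers(0, d, size=(n_walkers, n_steps))
    signs = rng.choice([-1, 1], size=(n_walkers, n_steps))
    steps = np.zeros((n_walkers, n_steps, d), dtype=int)
    steps[np.arange(n_walkers)[:, None], np.arange(n_steps)[None, :], axes] = signs
    pos = steps.cumsum(axis=1)
    at_origin = (pos == 0).all(axis=2)
    return at_origin.sum(axis=1).mean()

r1, r3 = mean_returns(1), mean_returns(3)
print(r1, r3)  # the recurrent d=1 walk accumulates far more returns than the transient d=3 walk
```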

First passage time: First passage time calculation using generating functions. The generating functions also give the survival probability, which is closely related to the probability of ever coming back.

Random walk in a graph


Continuous space

Continuous space-time limit from discrete random walk

Diffusion: with continuous space and continuous time one obtains the Diffusion equation

Can also have continuous space, and discrete time, although not often used.

Phenomenological derivation of the Diffusion equation: use Fick's laws of diffusion and the Einstein–Smoluchowski relation

Einstein's original derivation from the Chapman-Kolmogorov equation, as Brownian motion is assumed to be a Markov process


Simulate on Matlab

Brownian ratchets

guillefix 26th January 2016 at 7:01pm

The Fokker-Planck equation has a stationary solution, for a biased periodic potential:

A Brownian ratchet occurs when the potential is asymmetric. A particularly nice example is the sawtooth potential, in which the above equation gives:

for the first site.

We find three regimes:

The drift velocity is:

which plotted looks like:

This shows an asymmetry between the positive and negative force regions, similar to that shown in the current-voltage curve of a diode:

In fact, ratchets and diodes are very analogous, and ratchets, including Brownian ratchets, can be thought of as mechanical diodes. Indeed, the famous Feynman Brownian ratchet paradox was formulated by Brillouin in terms of a diode rectifier (the Brillouin paradox)

This rectification in Brownian ratchets can be used as a basis for fluctuation-driven transport, which is a proposed mechanism for molecular motors. See here

An example of this is in the tilting ratchet, in which the bias FF used above, oscillates.

Flashing ratchet

Another example of a Brownian ratchet is when the potential U(x)U(x) itself oscillates (fluctuating between a low and a high potential). A special case has the potential switching stochastically between two states; this is known as the flashing ratchet. If one of the two states has no potential, it is called the on-off ratchet.

A way to solve for the probability in the stochastically flashing ratchet is to add a new label to the probability representing one of the two states of the potential landscape, call them ++ and -. Then we get a Fokker-Planck/Master equation for our continuous (space labelled by x\vec{x}) and discrete configuration space:

One can then work out the evolution equation for the total probability of being at position x\vec{x}, and it turns out to have the form of a Fokker-Planck equation with an effective potential.
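A minimal Langevin simulation sketch of an on-off sawtooth ratchet (all parameter values here are made up for illustration; units are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up parameters: sawtooth period L with barrier at x = a*L, height U0,
# diffusion constant D, on/off switching rate k, time step dt (arbitrary units)
L, a, U0, D, k, dt = 1.0, 0.1, 5.0, 1.0, 10.0, 1e-4

def force(x):
    """Force -dU/dx of a periodic sawtooth potential with minima at multiples of L."""
    xr = x % L
    return np.where(xr < a * L, -U0 / (a * L), U0 / ((1 - a) * L))

n, steps = 1000, 10000
x = np.zeros(n)                    # particle positions
on = np.ones(n, dtype=bool)        # current state of the flashing potential
for _ in range(steps):
    on ^= rng.random(n) < k * dt   # stochastic on/off switching
    drift = np.where(on, force(x), 0.0)
    x += drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)

print(x.mean())  # mean displacement of the rectified Brownian motion
```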

Building

guillefix 5th July 2016 at 4:12am

Bulk matter

guillefix 3rd June 2016 at 2:41am

Bulk matter refers to a piece of matter composed of sufficiently many building blocks (elementary particles, atoms, molecules, ...) in such a way that a simple statistical description is appropriate, and bulk properties like temperature, Viscosity, elasticity, etc. can be defined. These are also called material properties or macroscopic properties.

Bulk matter can be either composed of a single phase or a mixture of phases; see for instance dispersions.

I use the term "form" of matter to refer to a particular type of bulk matter: either a single phase or a mixture of phases. I also use the word material (see Materials science) mostly to refer to a particular form of matter. I think that "phase" is sometimes used more widely in the same sense as I use the term "form".

When is a system bulk matter?

Note that many pieces of matter are formed by components interacting in complicated ways, in such a way that a simple statistical description does not appropriately describe its behaviour for many purposes; for instance, a computer, or a cell. These are in most situations not considered as bulk matter, and should be treated as Complex systems instead. However, whether a statistical description is appropriate, and therefore whether they are considered bulk matter, really depends on the problem, and so for some problems, these systems can be considered bulk matter (for instance, when studying the overall mechanical strength of a computer system). From here on, "matter" refers to bulk matter, unless otherwise specified.

Classification

Bulk matter can be classified depending on its phase (see Condensed matter physics for a more detailed explanation):

Physics

Mechanics

The Mechanics of most classical types of bulk matter can be macroscopically described via Continuum mechanics, which describes matter in terms of continuum equations, based on space-time varying fields that evolve according to Differential equations.

This is the foundational theory used in Mechanical engineering, and related areas. Continuum equations are also widely used in physics, particularly in Solid mechanics, Fluid mechanics, Soft matter physics, Astrophysics, rheology, etc.

Rheology is a branch of continuum mechanics that studies the flow of matter, primarily in a liquid state, but also as 'soft solids' or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. That is, rheology does not study a particular class of bulk matter, but the flow of any bulk matter.

Thermodynamics

Thermodynamics is the classical theory describing the flow of heat through matter. It is often combined with continuum mechanics to explain phenomena such as convection.

Modern physics: statistical physics, quantum mechanics

There are many more complex phases of matter particularly in soft condensed matter, that go beyond those simply described by continuum equations (although these are still very useful in many of these). Description of these often needs more advanced ideas from Statistical physics.

The microscopic study of matter, as done for example in Quantum condensed matter physics and Statistical physics, also goes beyond phenomenological and macroscopic descriptions of classical physics and tries to derive materials properties from microscopic physics.

Burrows-Wheeler transform

guillefix 1st July 2016 at 2:05am

Business

guillefix 3rd June 2016 at 4:15am

Busy beaver

guillefix 15th July 2016 at 9:33pm

Note busy beavers are often defined just for Turing machines on an input tape which is initially blank.

Applications in Coding theorem method


https://en.wikipedia.org/wiki/Busy_beaver

Understanding proof for Busy Beaver being uncomputable

C/C++

guillefix 13th July 2016 at 9:05pm

The "Clockwise/Spiral Rule" for parsing C variable declarations!

Random numbers and probability distributions in C++

rand-Considered-Harmful. New functions in C++; see minute 15, for example.

Uses this library: http://www.cplusplus.com/reference/random/. rand() is considered deprecated for most uses.

C++ tutorial

volatile qualifier

References are nothing but constant pointers in C++ (see here).

See Lynda.com videos.

Calabi–Yau manifold

guillefix 24th June 2016 at 1:31am

Calabi–Yau manifold is a special type of manifold that is described in certain branches of mathematics such as algebraic geometry. The Calabi–Yau manifold's properties, such as Ricci flatness, also yield applications in theoretical physics. Particularly in superstring theory, the extra dimensions of spacetime are sometimes conjectured to take the form of a 6-dimensional Calabi–Yau manifold, which led to the idea of mirror symmetry.

String theory postulates 10 dimensions. The extra 6 dimensions have to be small (compactified) so that spacetime is approximately 4-dimensional. However, the shape of the extra 6 dimensions determines the laws of physics (the fundamental laws of particles). The problem, I think, is that it's hard to relate the two, and also that there are so many candidate manifolds.

Calculus

guillefix 25th June 2016 at 3:16pm

a.k.a. infinitesimal calculus

Fundamental theorem of calculus

Vector calculus

Call stack

guillefix 13th July 2016 at 2:36am

The portion of allocated memory of a process, where local variables from functions that are being executed are stored.

https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Mips/stack.html

Note the code of the functions themselves is stored in the text section of the allocated memory, the stack stores the local variables that the function is using, as well as some other things, like function arguments, and return addresses. The lifo property of the stack allows the easy implementation of recursive function calls.

If the amount of space taken by the stack goes over a certain set limit, we get a stack overflow.
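The LIFO behaviour can be illustrated by replacing recursion with an explicit stack; this toy Python sketch mimics how pending frames accumulate and are then popped in reverse order:

```python
def factorial_with_stack(n):
    """Evaluate n! iteratively, using an explicit LIFO stack of 'frames'
    to mimic how recursive calls are laid out on the call stack."""
    stack = []          # each entry plays the role of a stack frame
    while n > 1:        # 'push' one frame per pending multiplication
        stack.append(n)
        n -= 1
    result = 1
    while stack:        # 'pop' frames in last-in, first-out order
        result *= stack.pop()
    return result

print(factorial_with_stack(5))  # 120
```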

https://www.youtube.com/watch?v=HQ3YI70PDe0

Cambrian

guillefix 8th July 2016 at 3:17am

A Geological period of the History of Earth that marks the beginning of the Phanerozoic eon, the eon of animal life.

Trilobite

Tuzoia

Sponge (animal)

Laggania

Car

guillefix 7th May 2016 at 2:48pm

Gear box

Caratheodory extension theorem

guillefix 15th July 2016 at 7:45pm

Video

To specify a measure on a Sigma-algebra it suffices to specify it on an Algebra (algebraic structure).

The measure then extends to the sigma-algebra generated by that algebra, i.e. to the smallest sigma-algebra containing that algebra. The generation can be done by starting with the algebra and closing it under complements and countable unions, so that it satisfies the axioms of a sigma-algebra. The extension is unique if the underlying measure is sigma-finite.

Carbohydrate

guillefix 8th July 2016 at 5:56pm

A carbohydrate is a biological molecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms.

The term is most common in biochemistry, where it is a synonym of saccharide, a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups:


https://www.wikiwand.com/en/Carbohydrate

Carboniferous

guillefix 8th July 2016 at 3:23am

A Geological period, with lots of Plants

Cartesian power

guillefix 14th July 2016 at 12:37am

The Cartesian product of a collection of copies of a Set. For instance the Cartesian square is X×XX \times X.
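For illustration, in Python the Cartesian power is given by `itertools.product` with the `repeat` argument:

```python
from itertools import product

# The Cartesian square {0,1}^2 and cube {0,1}^3 as lists of tuples
square = list(product([0, 1], repeat=2))
cube = list(product([0, 1], repeat=3))
print(square)     # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len(cube))  # 8 = |{0,1}|^3
```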

https://en.wikipedia.org/wiki/Cartesian_product#Cartesian_power

Cartesian product

guillefix 7th July 2016 at 6:49pm

An operation between Sets that gives a new set composed of Tuples of elements from the original sets.


http://mathworld.wolfram.com/CartesianProduct.html

https://en.wikipedia.org/wiki/Cartesian_product

Catalan numbers

guillefix 27th June 2016 at 10:34pm

A sequence of numbers that arises in Combinatorics, often of objects defined recursively, like trees. See also Analytic combinatorics
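A small sketch computing them both from the closed form and from the recursive convolution (the recursion is the one that counts, e.g., binary trees):

```python
from math import comb

def catalan(n):
    """Closed form C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_rec(n):
    """Recursive convolution C_{m+1} = sum_i C_i * C_{m-i}."""
    c = [1]  # C_0 = 1
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```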

https://en.wikipedia.org/wiki/Catalan_number

Catalan numbers

http://mathworld.wolfram.com/CatalanNumber.html

Catalytic conductor-insulator Janus swimmer

guillefix 17th June 2016 at 1:11am

Category theory

guillefix 28th June 2016 at 4:37am

Cauchy–Riemann equations

guillefix 28th April 2016 at 1:54pm

Celestial mechanics

guillefix 5th July 2016 at 3:16am

Celestial mechanics deals with the motions of celestial objects.

Movements of Earth

Cell

guillefix 8th July 2016 at 6:00pm

Cell biology

guillefix 8th July 2016 at 6:35pm

Cell cycle

guillefix 22nd June 2016 at 4:44am

Cycle of life of a Cell (biology)

Interphase, S phase, metaphase

Cell division

guillefix 25th June 2016 at 9:18pm

Cell membrane

guillefix 1st July 2016 at 10:34pm

Cell organelles

guillefix 22nd June 2016 at 4:44am

Nucleolus where ribosomes (and some signal-recognition molecules) are created

Cytoskeleton mesh of microtubules and actin filaments along which molecular motors walk

Centrosome organizes microtubules. It's where they originate. I think it's like the seed for their self-assembly

Golgi apparatus packages proteins into membrane-bound vesicles inside the cell before the vesicles are sent to their destination

Endoplasmic reticulum. Proteins self-assemble inside it. More stuff

This is how proteins are pushed inside the endoplasmic reticulum while the ribosome assembles them:


Cell transport

guillefix 2nd July 2016 at 7:11pm

Cell transport is movement of materials across cell membranes. Cell transport includes passive and active transport:

  • Passive transport does not require energy
  • Active transport requires energy to proceed.

Passive transport proceeds through (simple) diffusion, facilitated diffusion and osmosis.

Non-equilibrium statistical mechanics: from a paradigmatic model to biological transport

Passive transport

Simple diffusion

small non-polar molecules, like

  • Oxygen
  • Carbon dioxide

Facilitated diffusion

Large or polar molecules passing through membrane protein channels. Examples of molecules that need protein channels:

  • Charged ions
  • Glucose

Osmosis

Like water through aquaporins

Active transport

Requires energy, e.g. in the form of ATP.

Ion channels

https://en.wikipedia.org/wiki/Ion_channel

Sodium-potassium pump

Endocytosis and exocytosis


Cell Transport

https://www.brightstorm.com/science/biology/cell-functions-and-processes/cell-transport/

Cellular automata

guillefix 21st July 2016 at 3:30pm

Complex systems, artificial life in Bio-inspired computing. See Dynamical systems on networks, Discrete dynamical systems

Cellular automaton

Exploring Cellular Automata

Theory of Cellular Automata

Classification of Cellular Automata

Computer simulations of cellular automata

Automata theory

Statistical mechanics of cellular automata

Equivalence of Cellular Automata to Ising Models and Directed Percolation

Phase Transitions of Cellular Automata See Directed percolation

Statistical Mechanics of Probabilistic Cellular Automata

Universality in Elementary Cellular Automata proves a conjecture made by Stephen Wolfram in 1985, that an elementary one dimensional cellular automaton known as “Rule 110” is capable of universal computation, i.e. it is a Turing machine (see Theory of computation)

Statistical mechanics of cellular automata

Computation theory of cellular automata

A new kind of science - Stephen Wolfram

http://www.paradise.caltech.edu/~cook/papers/index.html

Game of Life Cellular Automata

Elementary cellular automaton (wiki)

Rule 90

Rule 30
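A minimal simulator for elementary cellular automata, run here with Rule 90 from a single seed cell (widths and step counts are arbitrary choices):

```python
import numpy as np

def elementary_ca(rule, steps, width=65):
    """Run an elementary cellular automaton from a single-cell seed."""
    table = [(rule >> i) & 1 for i in range(8)]  # rule number -> lookup table
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1
    history = [row.copy()]
    for _ in range(steps):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[4 * l + 2 * c + r] for l, c, r in zip(left, row, right)])
        history.append(row.copy())
    return np.array(history)

# Rule 90 (new cell = left XOR right) draws a Sierpinski triangle:
# row n has 2^(popcount(n)) live cells.
h = elementary_ca(90, 7)
print([int(r.sum()) for r in h])  # [1, 2, 2, 4, 2, 4, 4, 8]
```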

Examples

Von Neumann cellular automata are the original expression of cellular automata, the development of which were prompted by suggestions made to John von Neumann by his close friend and fellow mathematician Stanislaw Ulam. Their original purpose was to provide insight into the logical requirements for machine self-replication and were used in von Neumann's universal constructor.

Codd's cellular automaton designed to recreate the computation- and construction-universality of von Neumann's CA but with fewer states: 8 instead of 29.

Conway's Game of Life

Langton's loops consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm" (or pseudopod), which will become the daughter loop

Nobili cellular automata are a variation of von Neumann cellular automata (vNCA), in which additional states provide means of memory and the interference-free crossing of signals.

Brian's Brain

Langton's ant

Wireworld


See also Discrete dynamical systems

http://out.coy.cat/?n=1001010010

http://out.coy.cat/?n=topkek

http://out.coy.cat/?n=nicememe

http://out.coy.cat/?n=welp

http://out.coy.cat/?n=culo

http://out.coy.cat/?n=bra

http://out.coy.cat/?n=megabra

http://out.coy.cat/?n=asdas

http://out.coy.cat/?n=cate wooow sierpinski

http://out.coy.cat/?n=liborio

http://out.coy.cat/?n=cognio

http://out.coy.cat/?n=black

http://out.coy.cat/?n=extropy

http://out.coy.cat/?n=entropy

http://out.coy.cat/?n=XOXO

http://out.coy.cat/?n=doitforthelulz

http://out.coy.cat/?n=rebroff

http://out.coy.cat/?n=topcate

http://cells.coy.cat/

http://cellularautomata.coy.cat/

http://out.coy.cat/?n=1269489990&s=0

http://out.coy.cat/?n=1269489997&s=0

http://out.coy.cat/?n=1269490006&s=0

http://out.coy.cat/?n=1269490035&s=0

http://out.coy.cat/?n=1269490071&s=0

like the matrix: http://out.coy.cat/rndpat.php?n=1269490075&s=0

http://out.coy.cat/?n=1269490127&s=0

http://psoup.math.wisc.edu/mcell/ca_gallery.html

Cellular respiration

guillefix 8th July 2016 at 6:21pm

The way cells produce energy

ATP & Respiration: Crash Course Biology #7

Most of the Biomolecules that gives us energy are processed and end up as Glucose

Glucose + 6 Oxygen –> 6 Carbon dioxide + 6 Water + ATP (Energy)

Cellular respiration stages

Glycolysis

Breaking Glucose into two 3-carbon molecules, called pyruvic acids, or pyruvate molecules. It uses 2 ATPs and produces 4 ATPs. It also produces NADH

Uses many enzymes, like phosphoglucoisomerase.

It is an anaerobic process, as it doesn't need oxygen. If there isn't oxygen, the pyruvates undergo Fermentation. Anaerobic respiration can also produce lactic acid.

However, the next steps in cellular respiration are aerobic and require oxygen.

Krebs cycle

Happens inside the inner membrane of the Mitochondria (in the matrix)

Pyruvate molecules >> 2 ATP (per glucose) + Energy

First, pyruvates are oxidized. One of the three carbons in the chain bonds with 2 oxygens and leaves as CO2. This leaves a 2-carbon compound called Acetyl CoA (acetyl coenzyme A)

Also an NAD+ picks up an H to form NADH

ATP

Form citric acid, from oxaloacetic acid and the Acetyl coA

There's more: it also produces NADH and FADH2

Electron transport chain

On membrane

ATP synthase

Ceramic

guillefix 11th May 2016 at 2:23pm

A ceramic is a rigid material that consists of an infinite three-dimensional network of sintered crystalline grains comprising metals bonded to carbon, nitrogen, or oxygen. (IUPAC)

Note: The term ceramic generally applies to any class of inorganic, non-metallic product subjected to high temperature during manufacture or use.

See also https://en.wikipedia.org/wiki/Ceramic

Chair

guillefix 5th July 2016 at 4:05am

A chair is a piece of furniture with a raised surface, commonly used to seat a single person.

Channel capacity

guillefix 1st July 2016 at 3:33pm

In Data transmission, the channel capacity is defined as

C=maxp(x)I(X;Y)C=\max_{p(x)} I(X;Y)

That is, the maximum mutual information of the conditional probability pp above, where the maximization is done over the possible probability distributions of the inputs (or, equivalently, over the induced distributions of the outputs). One can show this is equal to the maximum rate of information transfer over a channel such that we can recover the information at the output with negligible probability of error.

Note that changing the probabilities of the inputs can be accomplished by choosing different codes to encode the input. Therefore the channel capacity can be considered to be maximizing over codes. In particular:

Channel coding theorem: Long enough code blocks can achieve the channel capacity limits (similar to arguments for understanding entropy by many trials).

The capacity CC is the logarithm of the number of distinguishable input signals.
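As a concrete sketch, for a binary symmetric channel with flip probability ff the maximization can be done numerically, recovering the known result C=1H2(f)C = 1 - H_2(f) at a uniform input distribution:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_info_bsc(q, f):
    """I(X;Y) for a binary symmetric channel: flip prob f, input P(X=1)=q."""
    py1 = q * (1 - f) + (1 - q) * f   # P(Y=1)
    return h2(py1) - h2(f)            # I(X;Y) = H(Y) - H(Y|X)

f = 0.1
capacity = max(mutual_info_bsc(q, f) for q in np.linspace(0, 1, 1001))
print(capacity)  # ~0.531 bits = 1 - H2(0.1), achieved at q = 1/2
```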

Channel coding theorem

guillefix 1st July 2016 at 4:15pm

aka noisy-channel coding theorem

In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem), establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel, called the Channel capacity.

In other words, the theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if R<C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C.

Long enough code blocks can achieve the channel capacity limits (similar to arguments for understanding entropy by many trials).

See Data transmission

Chaos theory

guillefix 16th June 2016 at 8:33pm

–> See Oxford course notes

Characteristics of chaos

  • Countable infinity of periodic orbits
  • Uncountable infinity of aperiodic orbits
  • Sensitive dependence on initial conditions. See Lyapunov exponents in Nonlinear dynamical systems
  • A dense orbit, which comes arbitrarily close to all periodic orbits.

Chaotic maps

Symbolic dynamics can be used to analyze them

Chaotic Nonlinear dynamical systems

Routes to chaos

Period doubling

Sarkovskii's theorem

Period 3 implies chaos

Intermittency

Blue sky catastrophe


Every chaotic dynamical system is a fractal-manufacturing machine

An Introduction to chaotic dynamical systems. Second edition

Chemical bonds

guillefix 28th April 2016 at 11:16pm
  • Ionic interactions. Transfer of electrons between atoms makes them ions, which attract via the Coulomb interaction, so 1/r1/r and isotropic. It is 100kBT\sim 100 k_B T. However, the interaction is strongly modified in a solution, due to screening (similar to Debye screening in Plasma physics).
  • Covalent bonds. Due to shared electrons that are attracted to both nuclei (or sometimes a few nuclei, in molecules). These are the most common bonds in molecules (see Molecular physics). They are short-ranged and highly directional (anisotropic). They are 30 to 100kBT\sim 30\text{ to }100 k_B T at room TT.
  • Metallic bonds are a special case of covalent bond, where electrons are delocalized over macroscopic regions. Common in metals.
  • “vibrational” chemical bond. http://www.scientificamerican.com/article/chemists-confirm-the-existence-of-new-type-of-bond/

Chemical engineering

guillefix 2nd July 2016 at 3:51am

Chemical potential

guillefix 2nd July 2016 at 3:30pm

Chemical synthesis

guillefix 4th March 2016 at 1:01am

The synthesis machine !!

The ability to make small organic molecules is at the heart of everything from drug development to the making of new dyes and agricultural chemicals. But ever since the dawn of synthetic organic chemistry in the 1820s, the process has required slow, painstaking effort. Now, however, researchers led by Martin Burke, a chemist at the University of Illinois, Urbana-Champaign, have developed a novel machine that may change all that. The machine automatically synthesizes new small organic molecules by welding together premade building blocks that can be put together in any configuration. Two hundred such building blocks already exist. And thousands of other similar molecules can also be used in the process. As a result, the machine has the ability to make billions of different small organic compounds that can then be tested as new drugs or for other uses. If widely adopted, the synthesis machine could revolutionize organic chemistry, turning it from a slow, painstaking process to a made-for-order business.

Idea for a neural network for chemical synthesis and manufacturing etc. Facebook post: https://www.facebook.com/guillermovalleperez/posts/10153853693416223?

Chemistry

guillefix 22nd June 2016 at 5:05am

Atomic structure

Dynamic periodic table

IUPAC nomenclature page Gold book

TED-Ed and Periodic Videos

The Photographic Periodic Table of the Elements

John McMurry, Robert C. Fay-Chemistry, 6th Edition-Prentice Hall (2012)

IUPAC interactive link map

Cronin group. Very nice research in inorganic biology, evolution, synthesis, and applications.

(Theilheimer's Synthetic Methods of Organic Chemistry ) Alan F. Finch-S Karger Pub (2001)

Random

https://en.wikipedia.org/wiki/Hydrolysis

scents.jpg

http://www.compoundchem.com/2016/05/04/oxidation-reactions-of-alcohols/

Chemotaxis

guillefix 9th June 2016 at 6:51pm

Chemotaxis (from chemo- + taxis) is the movement of an organism in response to a chemical stimulus.

https://en.wikipedia.org/wiki/Chemotaxis

See also Phoretic mechanisms of self-propelled colloids

Chromostereopsis

guillefix 12th July 2016 at 12:23am

Chromostereopsis is a visual illusion whereby the impression of depth is conveyed in two-dimensional color images, usually of red-blue or red-green colors, but can also be perceived with red-grey or blue-grey images. Such illusions have been reported for over a century and have generally been attributed to some form of chromatic aberration.


https://www.wikiwand.com/en/Chromostereopsis

Cinema

guillefix 25th June 2016 at 4:14am

Cinematography & Theatre

guillefix 21st January 2016 at 8:59pm

circle_model.png

guillefix 31st January 2016 at 9:12pm

Circular economy

guillefix 3rd April 2016 at 3:34pm

Civil engineering

guillefix 17th March 2016 at 4:01pm

See Architecture. Relations to society, and societal organization: infrastructure, economy, governance, culture.

Civilization

guillefix 1st July 2016 at 11:12pm

A civilization is any complex society characterized by urban development, social stratification, symbolic communication forms (typically, Writing systems), and a perceived separation from and domination over the natural environment by a cultural elite.

Classical mechanics

guillefix 11th June 2016 at 1:50pm

Classification

guillefix 9th July 2016 at 4:47am

Discriminative Supervised learning where the output value is discrete, categorical, or qualitative, with no implicit ordering or notion of closeness between the values.

Many of the same methods as in regression apply, as the problem is quite similar.
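As a minimal illustration of discriminative classification, a logistic regression fitted by gradient descent on a made-up two-blob dataset (all data and hyperparameters here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up toy data: two well-separated Gaussian blobs, labelled 0 and 1
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression trained by gradient descent on the cross-entropy loss
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted P(y=1 | x)
    w -= lr * X.T @ (p - y) / len(y)      # gradient of the mean cross-entropy
    b -= lr * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
acc = (pred == y).mean()
print(acc)  # accuracy on this separable toy set
```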

Support vector machines. Software for SVMs: http://svmlight.joachims.org/

How do you classify data that lies on an infinite dimensional space

Supervised classification

Logistic regression

Artificial neural network (see also Deep learning)


Ordered categorical classification

The output has a notion of order, but not of closeness, so it is qualitative.


https://www.wikiwand.com/en/Statistical_classification

Cleaning

guillefix 8th July 2016 at 3:15am

An Activity or Process aimed at, or resulting in, making something Clean.

Cleaning tool

guillefix 8th July 2016 at 3:16am

Climate

guillefix 28th June 2016 at 4:25pm

Cloud computing

guillefix 7th May 2016 at 1:35am

Coding theorem method

guillefix 15th July 2016 at 9:33pm

See MMathPhys oral presentation, Algorithmic information theory

Using the coding theorem to estimate the Kolmogorov complexity of short strings. The estimate is defined as:

Km(s)=log2(D(n,k)(s))K_m (s) = -\log_2(D(n,k)(s))

where

D(n,k)(s):={T(n,k):T produces s}{T(n,k):T halts}D(n,k)(s) : = \frac{|\{T \in (n,k) : T \text{ produces } s\}|}{|\{T \in (n,k) : T \text{ halts}\}|}

where (n,k)(n,k) is the set of Turing machines with nn states and kk letters in the alphabet of the input tape. The Turing machines are fed a blank tape, and whether the program halts is determined using a Busy beaver function.
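A toy illustration of the estimator itself (the frequency counts below are invented, not real Turing-machine output counts): given an output-frequency distribution, KmK_m assigns low complexity to frequent outputs and high complexity to rare ones.

```python
import math

# Invented frequency counts standing in for |{T : T produces s}|
counts = {"0": 500, "1": 498, "00": 120, "01": 61, "0101": 3}
halting = sum(counts.values())       # stands in for |{T : T halts}|

def Km(s):
    """Km(s) = -log2 D(n,k)(s), with D(n,k)(s) the output frequency of s."""
    return -math.log2(counts[s] / halting)

for s in counts:
    print(s, round(Km(s), 2))  # frequent outputs get low complexity estimates
```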

An extension to NN-dimensional arrays has been developed using the Block decomposition method


See this paper and this one. For some reason this seems to be a popular idea in Psychology.

Using these methods the people at Algorithmic nature group made The Online Algorithmic Complexity Calculator

More: Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness

Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines

Coding theory

guillefix 4th July 2016 at 11:59pm

A code is a representation of data, given by an injective map between two sets. These sets are often called the Source alphabet and the code alphabet, respectively.

Coding theory (and/or coding methods) is the study of codes that satisfy certain properties. These properties are often geared towards Data transmission, Data compression, and other areas in Information theory.
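A toy example of such an injective map, here a prefix-free binary code (the codewords are made up for illustration), together with its decoder:

```python
# A toy injective (prefix-free) code from a source alphabet to binary codewords
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

def encode(msg):
    return "".join(code[ch] for ch in msg)

def decode(bits):
    inv, out, buf = {v: k for k, v in code.items()}, [], ""
    for bit in bits:
        buf += bit
        if buf in inv:      # prefix-freeness: the first match is a whole codeword
            out.append(inv[buf])
            buf = ""
    return "".join(out)

print(encode("badcab"))           # 100111110010
print(decode(encode("badcab")))   # badcab
```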

Types of codes

Variable-length code

Codes for transmission/storage reliability

Codes that approach the Channel capacity limit imposed by the Channel coding theorem

See Error-correcting code

Codes for transmission/storage efficiency

Codes that approach the entropy limit imposed by the Source coding theorem for lossless codes, or the limits imposed by Rate-distortion theory for lossy codes.

See Data compression codes


https://www.youtube.com/channel/UCgzV25kcbpkRMfYSNSgCAFA

https://www.youtube.com/playlist?list=PL2E3F2883347BB45B

Coding_theorem.png

guillefix 19th April 2016 at 7:09pm

Cognitive computing

guillefix 30th June 2016 at 1:39am

http://www.research.ibm.com/cognitive-computing/

That is the promise of cognitive systems–a category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition. These systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes.

Cognitive enhancement

guillefix 21st January 2016 at 8:50pm

Cognitive science

guillefix 11th June 2016 at 2:08pm

Collective behaviour of active colloids

guillefix 18th June 2016 at 1:22am

See Active colloid, Self-diffusiophoresis, Self-propelled particle. See also Collective hydrodynamics of active entities, Self-assembly of active colloids

Recent review: Emergent behavior in active colloids

Dynamic self-organization of motile components can be observed in a wide range of length scales, from bird flocks (ref) to bacterial colonies (ref, ref) and assemblies of motor and structural proteins (ref). The fascination with these phenomena has naturally inspired researchers to use a physical understanding of motility to engineer complex emergent behaviors in model systems that promise revolutionary advances in technological applications if combined with other novel biomimetic functions, such as signal processing and decision making (see Swarm robotics), or replication (see Self-replication of information-bearing nanoscale patterns).

Biological components pose inevitable limitations on this task, while chemical [ 14 ], mechanical [ 15 ], or externally actuated [ 16 ] imitations appear more promising

Individual and collective behavior of artificial swimmers: "Janus particles"

Transport and Collective Dynamics in Suspensions of Confined Swimming Particles

Emergent, Collective Oscillations of Self-Mobile Particles and Patterned Surfaces under Redox Conditions

Emergent Cometlike Swarming of Optically Driven Thermally Active Colloids

Collective behaviour of thermally active colloids. This model doesn't consider the dependence of the interaction on the relative orientation of the colloids. This effect is incorporated in their later model described in the paper on chemotactic colloids, and on optically driven thermally active colloids

Clusters, asters, and collective oscillations in chemotactic colloids

Phoretic mechanisms of self-propelled colloids

Behaviour of a single chemotactic colloid in an external substrate concentration gradient

Theory of phoretic mechanisms of self-propelled colloids

Collective behaviour

FROM CHEMOTAXIS TO COLLECTIVE MOTION

Effective interactions of active colloids

They consider the former in the paper, and look at pairwise interactions.

Stochastic equations of motion

Constructing a Langevin equation using the drift terms derived in here, which depend on

Note: "product gradient" means a gradient in the product concentration. Extra terms were added because the coefficients Φ0,α0\Phi_0, \alpha_0, etc. only take an external ss gradient into account, and now we also have external pp gradients produced by the other catalytic colloids.

These equations can also be derived phenomenologically following from symmetry principles (see citations in paper), but one doesn't get expressions for the coefficients.

Concentration fields

The substrate and product fields (ss, and pp) are themselves determined by the distribution of colloid positions and orientations. The substrate is consumed and the product is generated at the rate

Q(r,t)=κ(s)αXαδ(rrαXα)σ(Xαn^)Q(\mathbf{r}, t) = \kappa(s) \sum_\alpha \int _{|\mathbf{X}_\alpha|} \delta (\mathbf{r} - \mathbf{r}_\alpha -\mathbf{X}_\alpha) \sigma(\mathbf{X}_\alpha \cdot \hat{\mathbf{n}})

An evolution equation for the concentration fields is obtained, depending on the averaged colloid number density and orientation density. The steady state is then considered, and the Fourier transform is applied to obtain information on the length scale of the interaction, expressed in the screening length.

Saturated vs unsaturated regime. MM curve? Refers to the Michaelis-Menten rule in Enzyme kinetics. The saturated and unsaturated regimes refer to regimes where κ(s)\kappa(s) (which has the MM form) is saturated vs unsaturated.

Colloid number density ρ\rho and orientation density w\mathbf{w} (averaged equations)

These are obtained from the Langevin equations above. For the orientational equations, the averaged equation involves higher moments, and a closure condition needs to be imposed to express them in terms of lower moments (mean field approximation). See Supplementary Material in paper.

The equations for ρ\rho and w\mathbf{w} depend on the gradients of ss and pp, while the equilibrium ss and pp fields depend on ρ\rho and w\mathbf{w}. The two equations can be combined to obtain closed equations for ss and pp with complicated effective interactions, which give rise to a rich diversity of possible phases, depending on the several parameters in the model. The main two regimes are:

Unsaturated

Saturated

Formation of asters (i.e. star-like formations, I think)...

Collective behaviour of thermally active colloids

guillefix 10th June 2016 at 4:12pm

Collective behaviour of active colloids

Collective Behavior of Thermally Active Colloids

This model doesn't consider the dependence of the interaction on the relative orientation of the colloids. This effect is incorporated in their later model described in the paper on chemotactic colloids, see here. It is also incorporated on their paper on optically driven thermally active colloids. See below.

Thermal interactions

via a self-generated temperature gradient (via half-coating with a dark absorbing material and laser radiation bathing the sample), and the Soret effect, also known as Thermophoresis

Fokker-Planck description

Regimes

Depend on the Soret number


Emergent Cometlike Swarming of Optically Driven Thermally Active Colloids

Brownian dynamics simulation of self-thermophoretic colloids. The colloids don't have an intrinsic asymmetry, but there is an asymmetry in their produced temperature fields because of non-uniform illumination of the light-absorbing (dark) colloid. The illumination is assumed to be directed from above downwards, and the effects of shadowing by the colloids above a particular colloid are taken into account using simple geometric optics (as a better optics treatment using light scattering on the particles is computationally very intensive).

Comet-like swarms are formed, with interesting dynamic features, like internal circulation of particles in the swarm, evaporation, and ejection of hot particles from the tip. The high-density head region forms a hot core which pulls the tail of the comet along.

It also drives thermal and density fluctuations. The particles at the top have the largest self-propelling velocity vsv_s, so they tend to move up. They are also pulled down by the large hot core (creating large temperature gradients) below them. This interplay of effects causes larger fluctuations than one would expect in an equilibrium system (Δρ/ρ\Delta \rho / \sqrt{\rho}). Density fluctuations at the swarm tip and temperature fluctuations are intertwined due to the transient appearance of heat sources.

vT\vec{v}_T is the drift velocity of the thermophoretic attraction due to the far-field temperature gradient created by one particle causing a thermophoretic response in another (here we assume the Soret coefficient is negative, so that the particles attract: a particle climbs up TT gradients).
vs\vec{v}_s is the self-thermophoretic drift velocity due to the particle interacting with the temperature gradient on its surface created by its own non-uniform illumination.

The swarm is a long-lived but transient structure; it is subject to a slow leakage that eventually dissolves it. It loses particles linearly with time.

Velocity of swarm

If there are approximately NhN_h particles in the head and NtN_t particles in the tail, then only the particles at the head are illuminated, so the whole swarm experiences a self-thermophoretic force of approximately NhfN_h f. For a single particle the drift velocity is v0=f/ηv_0 = f/\eta, where ff is the self-thermophoretic force and η\eta is the drag coefficient. Because there are Nh+NtN_h + N_t particles in the swarm, its effective drag coefficient is Nh+NtN_h + N_t times that of a single particle, i.e. (Nh+Nt)η(N_h + N_t) \eta. Therefore the drift velocity of the swarm is Vswarm=Nhf(Nh+Nt)η=Nhv0Nh+Nt=Rv0R+RV_{\text{swarm}} = \frac{N_h f}{(N_h + N_t) \eta} = \frac{N_h v_0}{N_h + N_t} = \frac{R_\perp v_0}{R_\perp + R_{||}}. The last expression comes from estimating the ratio of the number of colloids in the head and in the tail from the swarm's shape as Nh/RNt/RN_h/R_\perp \sim N_t / R_{||}, where RR_\perp is the width (radius perpendicular to the light source) and RR_{||} is the length, or height, of the swarm.
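
The estimate above is simple enough to check numerically; a small sketch (function names and the example numbers are mine, not from the paper):

```python
# Swarm drift estimate from the notes: V_swarm = N_h v0 / (N_h + N_t),
# equivalently (via N_h/R_perp ~ N_t/R_par) V_swarm = R_perp v0 / (R_perp + R_par).

def swarm_velocity(v0, n_head, n_tail):
    """Drift of the whole swarm: force N_h*f divided by drag (N_h + N_t)*eta."""
    return n_head * v0 / (n_head + n_tail)

def swarm_velocity_from_shape(v0, r_perp, r_par):
    """Same estimate using the shape ratio N_h/R_perp ~ N_t/R_par."""
    return r_perp * v0 / (r_perp + r_par)

# Hypothetical numbers: 200 head particles pulling 800 tail particles
# move the swarm at one fifth of the single-particle drift v0.
print(swarm_velocity(1.0, 200, 800))            # 0.2
print(swarm_velocity_from_shape(1.0, 2.0, 8.0))  # 0.2
```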

Collective hydrodynamics of active entities

guillefix 3rd June 2016 at 12:13am

See Active matter

Continuum equations of motions for dense active nematics, such as suspensions of microtubules driven by molecular motors, or dense collections of microswimmers. They are described as nematic Liquid crystals, with an extra term in the term that leads to instability (typical of Non-equilibrium statistical physics), and active turbulence. See Complex fluid dynamics for the dynamical equations of liquid crystals (Beris-Edwards equations).

Addition to stress, is also discussed in Complex fluid dynamics. However, Is there an easier way to see this? The contribution to stress from the active colloids is the average value of the stresslet, which for nematic active particles, turns out to be Πjk=ζQjk\Pi_{jk} = -\zeta Q_{jk}, the active stress, where ζ\zeta is a measure of the level of activity.

Note: the velocity of the swimmer doesn't appear in the equations because it only sets the velocity of the fluid at the first instant, when the swimmers start; from then on the velocity of the fluid equals the velocity of the swimmer and just evolves according to the stresses described in the paper and here. So the fact that they actually swim only sets up their initial velocities, and from there on they are equivalent to rods with symmetric thrust (say two thrusters, one at each end). For other active nematics, like Molecular motors+Microtubules mixtures, the "symmetric pusher" model is reasonable even during the short accelerating phase.

Therefore, because the RHS of the momentum equation contains jΠjk\partial_j \Pi_{jk}, changes in the direction of orientation of the nematic order induce flow. From these considerations, and looking at the induced flows, one can already find two examples of instabilities:

  • extensile systems are unstable to bend perturbations. See a derivation in Soft matter physics notes.
  • contractile systems are unstable to splay perturbations

Also activity can stabilize or de-stabilize nematic ordering, depending on the kind of activity and shape of particles:

  • for elongated particles, extensile flow stabilizes nematic order, while contractile flow destabilizes it.
  • for plate-like particles, contractile flow stabilizes nematic order, while extensile flow destabilizes it.

These can be understood from the pictures in Figure 7 in article, reproduced below:

Fluctuating hydrodynamics and microrheology of a dilute suspension of swimming bacteria

Collective hydrodynamics of active entities: applications

guillefix 1st May 2016 at 7:08pm
  • active turbulence
  • interactions between topological defects, walls (regions of high bend perturbation), and flows (jets, and vortical).
  • velocity-velocity correlation length independent of activity.
  • Application: microtubules and Molecular motors (kinesin). Motor clusters attach to tubes creating bridges between them, and sliding them relative to each other. Adding PEG beads gives depletion interaction, and makes tubes come together into bundles, enhancing the effects of their relative sliding.
  • ... Some doubts about some of the applications, and results.
  • Lyotropic active nematics Lyotropic refers to concentration-dependent effects, and in particular we look at the evolution of localized patches of active material surrounded by isotropic fluid. Evolution governed by the Cahn-Hilliard equation, with free energy and extra stress described in Biphasic, Lyotropic, Active Nematics
  • Active anchoring Anchoring of active nematics at interfaces, due to their activity alone.

Collisional plasma physics

guillefix 29th January 2016 at 12:58am

Collisionless plasma physics

guillefix 29th January 2016 at 12:57am

Colloid

guillefix 1st July 2016 at 10:48pm

A colloid is most often used to refer to either:

  • a colloidal dispersion, or
  • a colloidal particle

Colloidal: State of subdivision such that the molecules or polymolecular particles dispersed in a medium have at least one dimension between approximately 1 nm and 1 μm, or that in a system discontinuities are found at distances of that order. (IUPAC)

Colloid: Short synonym for colloidal system. (IUPAC)

A colloidal dispersion is a system in which particles of colloidal size of any nature (e.g. solid, liquid or gas) are dispersed in a continuous phase of a different composition (or state). The name "dispersed phase" for the particles should be used only if they have essentially the properties of a bulk phase of the same composition.

(http://goldbook.iupac.org/C01174.html)

Colloid physics

Branch of Physics dealing with physical properties of colloidal systems (i.e. motion, forces, etc. at the scale of the colloidal system).

See book by Hunter - Foundations of colloid science

Colloid physics

guillefix 9th June 2016 at 5:44pm

Branch of Physics dealing with physical properties of colloidal systems (i.e. motion, forces, etc. at the scale of the colloidal system).

The branch of soft matter dealing with colloids has a close connection with the other subjects of Condensed matter physics, like Solid-state physics. This is because colloidal Suspensions in many ways can behave analogously to solids, whether crystalline or glassy. Colloidal particles have also been used as model systems for atoms or molecules, and so there are some connections with Atomic physics and Molecular physics.

Microhydrodynamics of colloids

Phoretic mechanisms of colloids

These are important in Active matter (see Active colloid), in Biophysics, and Nanotechnology.

Colour

guillefix 14th July 2016 at 6:06pm

Combinatorial game theory

guillefix 13th June 2016 at 7:56pm

See book by Conway

Example: Dots and boxes

Combinatorics

guillefix 26th June 2016 at 5:11pm

Common features of GP maps

guillefix 20th April 2016 at 11:53pm
  • Redundancy. Many genotypes per phenotype.
  • Bias. Highly biased distribution of genotypes per phenotype, i.e. some phenotypes have many more corresponding genotypes than most phenotypes.
  • Negative correlation of genotypic robustness and evolvability. This is intuitive because genotypes have a fixed degree in the mutational network (with edges corresponding to one-point mutations in the genotype). Therefore, if you are connected to many genotypes in the same neutral space (high robustness), you have few possible connections left, and so fewer available phenotypes and thus low evolvability. This assumes that genotypes with low robustness aren't connected to only a few genotypes, and don't have a "preference" for genotypes in particular other neutral spaces.
  • Phenotypic robustness and evolvability are positively correlated. This is because phenotypic robustness correlates positively with neutral space size. A large neutral space means that the phenotype is effectively connected with more genotypes and thus often more phenotypes than phenotypes with small neutral spaces.
  • Shape-space covering: one can reach most phenotypes from a single phenotype with few mutations. This is indicative of the large interconnectivity of the space.
  • A roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. I.e. phenotypes with large neutral spaces are more robust.

Defined precisely, genotypic robustness is the fraction of neutral mutations per genotype, and genotypic evolvability is the number of distinct phenotypes that are within one mutation of the genotype (and are not the same phenotype as that of the genotype). By contrast, phenotypic robustness is defined as the average fraction of neutral mutations per genotype across a given phenotype. This correlates positively with phenotypic evolvability, defined as the total number of distinct other phenotypes that are within one mutation of any of the genotypes belonging to the given phenotype.
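
These definitions are easy to make concrete on a toy GP map. The sketch below is my own illustration (a hypothetical "majority-symbol" phenotype map over binary genotypes, not taken from any cited paper), computing genotypic and phenotypic robustness and evolvability by brute force:

```python
from itertools import product

# Toy GP map, purely for illustration: genotypes are binary strings of
# length L, and the phenotype is the majority symbol of the string.
L = 5

def phenotype(g):
    return '1' if g.count('1') > L // 2 else '0'

def mutants(g):
    # all one-point mutants of genotype g
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i+1:] for i in range(L)]

genotypes = [''.join(bits) for bits in product('01', repeat=L)]

def genotypic_robustness(g):
    # fraction of one-point mutations that leave the phenotype unchanged
    return sum(phenotype(m) == phenotype(g) for m in mutants(g)) / L

def phenotypic_robustness(p):
    # average genotypic robustness over the neutral set of phenotype p
    neutral = [g for g in genotypes if phenotype(g) == p]
    return sum(genotypic_robustness(g) for g in neutral) / len(neutral)

def phenotypic_evolvability(p):
    # distinct other phenotypes one mutation away from the neutral set
    neutral = [g for g in genotypes if phenotype(g) == p]
    return len({phenotype(m) for g in neutral for m in mutants(g)} - {p})

print(phenotypic_robustness('0'), phenotypic_evolvability('0'))  # 0.625 1
```

With only two phenotypes the evolvability is trivially 1; richer GP maps (RNA secondary structure, HP lattice proteins) show the bias and correlations listed above.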

Communication

guillefix 5th July 2016 at 4:14am

Communication channel

guillefix 1st July 2016 at 3:11pm

A communication channel is a system in which the output depends probabilistically on the input.

The probability transition matrix for a given channel specifies the conditional probability of output yy given input xx, p(yx)p(y|x).
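
A minimal sketch (my own example, not from the source): the transition matrix of a binary symmetric channel with crossover probability eps, with rows indexing the input and columns the output:

```python
import numpy as np

# Binary symmetric channel: entry P[x, y] = p(y | x).
# Each row is a conditional distribution, so rows must sum to 1.
eps = 0.1
P = np.array([[1 - eps, eps],
              [eps, 1 - eps]])

assert np.allclose(P.sum(axis=1), 1.0)  # valid transition matrix
print(P[0, 1])  # probability the channel flips an input 0 into an output 1
```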

Communication complexity

guillefix 5th July 2016 at 4:14am

Communication system

guillefix 1st July 2016 at 3:14pm

A communication system, as studied in Communication theory is specified by:

Communication theory

guillefix 2nd July 2016 at 1:47am

See Information theory, Data transmission

Communication theory studies the properties of Communication systems

Source-channel separation theorem

Properties of communication systems

Properties of information source

Entropy rate

Properties of data transmission system

See Data transmission

Properties of destination

Community

guillefix 8th April 2016 at 5:36pm

Community structure in networks

guillefix 26th February 2016 at 12:34am

Compact space

guillefix 14th July 2016 at 3:30am

A Topological space XX is compact if every Filter base B\mathcal{B} on XX has an accumulation point. That is, there exists xXx \in X such that

for all NN(x)N \in \mathcal{N}(x), for all ABA \in \mathcal{B}, NAN \cap A \neq \emptyset.

An alternative, well-known definition involves properties of ‘coverings’ of X by families of open sets.
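
For reference, the covering formulation alluded to above is: a space is compact iff every open cover admits a finite subcover. In symbols:

```latex
X \text{ is compact} \iff \text{for every open cover } \{U_i\}_{i \in I} \text{ of } X \text{ there is a finite } J \subseteq I \text{ with } X = \bigcup_{j \in J} U_j.
```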

Comparative anatomy

guillefix 8th July 2016 at 6:57pm

Study of similarities and differences between the anatomy of different organisms

Comparative Anatomy: What Makes Us Animals - Crash Course Biology #21

Complex analysis

guillefix 31st January 2016 at 1:01am

Complex dynamics

guillefix 22nd May 2016 at 3:41pm

Complex fluid dynamics

guillefix 3rd June 2016 at 12:13am

Complex fluids are fluids with elements (mostly objects suspended in the fluid) whose dynamics couple to the fluid's dynamics, giving a more complex overall behaviour (see wiki page). The most important types are dispersions, which are composed of two coexisting phases. The main types are:

See Active matter, for the interesting and important type of complex fluid, composed of active or driven elements.

Notes from Paul Dellar's course his website

• Low Reynolds number hydrodynamics, general mathematical results, flow past a sphere. Stresses due to suspended rigid particles. Calculation of the Einstein viscosity for a dilute suspension

• Stresses due to Hookean dumb-bells. Derivation of the upper convected Maxwell model for a viscoelastic fluid. Properties of such fluids

• Suspensions of orientable particles, Jefferys model, very brief introduction to active suspensions and liquid crystals

Dynamic theory of nematic Liquid crystals

Classical models for nematodynamics, dynamics of nematic liquid crystals:

  • Ericksen-Leslie Theory, in terms of the director field n\mathbf{n}
  • Beris-Edwards model, in terms of the full tensorial order parameter Q\mathbf{Q}, thus the model is more detailed.

Doi theory?

See also Soft matter physics notes

See Beris A.N. and Edwards B.J., Thermodynamics of Flowing Systems (Oxford University Press) 1994., and I think Doi also has a book on this. See also here, and here.

Beris-Edwards equations

Continuum equations of motion of nematic Liquid crystals, in terms of the tensorial order parameter Q\mathbf{Q}.

See The Hydrodynamics of Active Systems.

Suspension dynamics

A physical introduction to suspension dynamics - Guazzelli, Morris

Fluid dynamics of fluid with suspended particles.

The suspended particles will (after a short transient) follow the fluid in its translation and rotation. However, they can't follow it in its strain deformation. Therefore the strain component of the externally imposed flow finds resistance in the suspended particles (spheres, for example), and this resistance means the particles disturb the flow. Because the flow determines the stress tensor, they will affect the stress tensor. In particular, the way a suspended sphere affects the stress tensor is encoded in the stresslet.

Einstein derived the Einstein viscosity through dissipation arguments. Part of these is also found in the book. Note that the dissipation is basically the integral of the stress times the strain rate σ:e\mathbf{\sigma}:\mathbf{e}, and is derived in Chapter 1.
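
For reference (a standard result, stated here from memory rather than from the book): for a dilute suspension of rigid spheres at volume fraction φ\varphi, the dissipation argument gives the Einstein viscosity

```latex
\eta_{\text{eff}} = \eta \left( 1 + \tfrac{5}{2}\varphi + O(\varphi^2) \right)
```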

There are steps in the derivation that I don't yet quite follow

Complex geometry

guillefix 24th June 2016 at 1:32am

Complex systems

guillefix 21st July 2016 at 3:29pm

A complex system is a high-dimensional system whose variables are strongly interdependent.

See Complexity theory for more discussion on the definition of a complex system.

–>Map of complexity science

Complex Systems (Mathematics Course at Oxford) blog

https://en.wikipedia.org/wiki/Complex_system https://en.wikipedia.org/wiki/Complex_systems

Features of complex systems: Self-organization and emergence. Evolution. Adaptation. Homeostasis, autopoiesis. Crowd dynamics, chaos, order and disorder...

Related: Soft matter physics. Non-equilibrium statistical physics

Models and examples: Boolean network, Automata, Cellular automata, Biology, Artificial chemistry, Fractals. Control theory and control systems, Nonlinear systems, Networks (in particular Dynamical systems on networks), Social system, Percolation, Self-organized criticality, Agent-based models

Maybe try to categorize these a bit.

Interesting idea about emergence and complex systems: Sloppy systems

Complex Networks and Energy Landscapes See Network theory.

Diffusion-limited aggregation

Percolation

Netlogo models library


ChaosBook.org videos also here YB channel

Synergetics

Chaos book

Complex Systems: A Survey

Methods and Techniques of Complex Systems Science: An Overview

Power-law Distributions in Empirical Data

Statistical physics of social dynamics

Part 1 Symbolic Dynamics and One-dimesional Cellular Automata: an Introduction Лекториум

http://www.complex-systems.com/

https://theory.org/complexity/

https://en.wikipedia.org/wiki/Homeostasis

https://www.youtube.com/user/StanfordComplexity/feed

Computation, Dynamics and the Phase-Transition

http://www.maths.qmul.ac.uk/research/applied

ABDUS SALAM MEMORIAL LECTURE SERIES

Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC)

Hans J. Herrmann

Researchers in complex systems or here actually

Double pendulum android app

How do I explain to non-mathematical people what "non-linear and complex systems" mean?

Computational Methods for Nonlinear Systems

http://sethna.lassp.cornell.edu/

http://cosnet.bifi.es/

p. grassberger

Complexity

guillefix 21st July 2016 at 3:30pm

See Complexity theory for definition and theory. See Complex systems for examples and applications.

Complexity is a general concept that has different meanings in different contexts. For instance, complexity is related to “incompressibility” in information theory and computer science. In dynamical systems, complexity is usually measured by the topological entropy and reflects, roughly speaking, the proliferation of periodic orbits with ever longer periods or the number of orbits that can be distinguished with increasing precision. In physics, the label “complex” is in principle attached to any nonlinear system whose numerical solutions exhibit a chaotic behavior. Neurologists claim that the human brain is the most complex system in the solar system, while entomologists teach us the baffling complexity of some insect societies. The list could be enlarged with examples from geometry, management science, communication and social networks, etc.

from book on Permutation complexity by Amigo

Complexity measures

Descriptional complexity

Computational complexity


Shannon entropy: a rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics

Good review: RANDOMNESS, INFORMATION, AND COMPLEXITY

Information and Complexity Measures in Dynamical Systems

See also Information theory, Statistical physics, Dynamical systems, Evolution, Simplicity bias.

Complexity measures based on data compression

guillefix 7th July 2016 at 7:15pm

See Descriptional complexity, Data compression

Some measures of descriptional complexity are based on Data compression techniques, like the Lempel-Ziv complexity.
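
As a concrete sketch, the Lempel-Ziv (1976) complexity of a binary string counts the phrases in its exhaustive production history. This is my own implementation of the standard parsing (conventions for the final, possibly incomplete phrase vary between papers):

```python
def lz76_complexity(s):
    """Number of phrases in the LZ76 exhaustive history of string s.
    Each new phrase is the shortest prefix of the remainder that has
    not yet occurred as a substring of everything seen so far."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase while s[i:i+l] already occurs in s[:i+l-1]
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# Classic example from Lempel & Ziv (1976): parses as 0.001.10.100.1000.101
print(lz76_complexity('0001101001000101'))  # 6
print(lz76_complexity('0' * 16))            # 2 (highly compressible)
```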

Relations to Grammar-based compressions used in Data compression

Application of Lempel–Ziv factorization to the approximation of grammar-based compression Relations between LZ-factorizations and grammar-based factorizations (G-factorization). The G-factorization gives an upper bound for the LZ complexity.

See this book too (same article), and this: Grammar Compression, LZ-Encodings, and String Algorithms with Implicit Input.

Complexity theory

guillefix 8th July 2016 at 1:32am

Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways. Etymologically, complex refers to a system made of many intertwined parts, and that's still the definition we use in science, although a precise measure hasn't been agreed upon. See Complexity.

But how intertwined, i.e. how many and what kind of interactions, does a system need to be called complex? I think a complex system should be defined as one in which the interactions significantly alter the behaviour of the system, relative to the one with no interactions. The primary example of interactions that qualitatively affect the behavior of a system are nonlinear interactions (see Nonlinear systems).

Definition by Cosma Shalizi: a complex system is a high-dimensional systems where the variables are strongly interdependent. Complex systems are ones with a large effective number of strongly-interdependent variables. This excludes both low-dimensional systems, and high-dimensional ones where the variables are either independent, or so strongly coupled that only a few variables effectively determine all the rest. Since the 1980s, an interdisciplinary movement of physicists, mathematicians, economists, computer scientists, biologists, anthropologists and other scientists has explored techniques for modeling a broad range of such systems, and their common features and inter-connections. These techniques rely heavily on intensive, sophisticated computer simulations, and notions of information, search and adaptation feature prominently in the theories. (The Statistical Analysis of Complex Systems Models)

See also: How do I explain to non-mathematical people what "non-linear and complex systems" mean?

Furthermore, Warren Weaver posited in 1948 two forms of complexity:

  • disorganized complexity
  • organized complexity

The way I interpret this is that organized or disorganized refers to the behaviour of the system, at some scale and coarse-graining level. If the system at some coarse-graining level has a behaviour that could be described by a less complex system (for example, as formalized by Kolmogorov complexity in AIT) than the original description, we say it displays organized complexity, and that new, simpler behavior has emerged (see Self-organization). This may also be called complexity reduction. One can see that coarse-graining will produce less complex descriptions, pretty much by definition. However, to get emergence, the system must allow some coarse-graining procedure that produces reasonable descriptions in the first place.

Disorganized complexity refers to some scale which does not allow a simpler coarse-grained description.

For example, a gas of particles represents a complex system (as the particles interact with each other in complex ways, i.e. ways that change the behavior of the system significantly relative to a system of non-interacting particles). At the scale of particles, we have disorganized complexity, as there is no coarse-grained description that can simplify the dynamics while still talking of all the particles. We may then use probabilistic descriptions. At larger scales, we can talk about large groups of particles and, using for instance averages from the probabilistic descriptions, we can construct coarse-grained descriptions in terms of "infinitesimal" volume elements interacting in less complex ways. We can say that "hydrodynamic behaviour has emerged".

Actually here I am referring to "complexity" as used in Complex systems theory. As Wiki says, Complexity theory can also refer to Computational complexity or Descriptional complexity (a fundamental concept in Algorithmic information theory).

Complexity and Self-organization

Universality-Complexity Classes for Partial Differential Equation Systems (from xmorphia) Taking ideas of universality and complexity classes of cellular automata from Stephen Wolfram (c.f. A New Kind of Science).

https://en.wikipedia.org/wiki/Complexity_theory

Kolmogorov Complexity – A Primer

The First Law of Complexodynamics

Well the complexity follows that pattern in the macroscale at least. Also:

Non-equilibrium is more complex; I think: because equilibrium can be described simply: the long time behaviour of the simple dynamical system; while non-eq has many more possibilities

https://jeremykun.com/2012/04/21/kolmogorov-complexity-a-primer/

See also the related: Computational complexity, and also Descriptional complexity, and Complex systems.


Complexity theory may be seen as part of complexity science, or they may be seen as equivalent disciplines. In any case, this page includes complexity science.

http://www.complexity.ecs.soton.ac.uk/


People

http://turing.iimas.unam.mx/~cgg/

Norbert Wiener, cybernetics

William Ross Ashby

Stuart Kauffman

Heinz von Foerster, Second-order cybernetics

Francis Heylighen, cyberneticist


Introduction to Circuit Complexity

Structural Complexity I

Structural Complexity II

complexity_transducers.png

Component (Graph theory)

guillefix 4th February 2016 at 2:06pm

A component is a subset of the network in which every pair of vertices is connected by at least one path, and which is maximal (i.e. no extra nodes can be added while preserving this property). A connected graph has only one component, while a disconnected one has more than one.

The adjacency matrix can always be written in block diagonal form with blocks corresponding to components.

Components in directed networks

Weakly connected components are components of a directed network ignoring the direction.

Strongly connected components have a path between any two vertices in both directions.

Acyclic graphs can't have strongly connected components with >1 vertex, since these would necessarily include a cycle.

Out-components are all the vertices reachable from a certain vertex, including the vertex itself.
In-components are all the vertices from which one can reach a certain vertex, including the vertex itself.

Both of these are identical for all vertices in a strongly connected component.
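
A brute-force sketch of these definitions on a toy directed graph (my own illustration; the quadratic mutual-reachability construction is chosen for clarity over efficiency):

```python
# Toy digraph as adjacency lists: 1->2->3->1 is a cycle, 4 is a sink.
graph = {1: [2], 2: [3], 3: [1, 4], 4: []}

def reachable(g, start):
    """Out-component of `start`: all vertices reachable from it (incl. itself)."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(g[v])
    return seen

def strongly_connected_components(g):
    """SCC of v = vertices u with v reachable from u and u reachable from v."""
    out = {v: reachable(g, v) for v in g}  # out-components
    comps, done = [], set()
    for v in g:
        if v not in done:
            comp = {u for u in out[v] if v in out[u]}  # mutual reachability
            comps.append(comp)
            done |= comp
    return comps

# Weak components: symmetrize the edges, then take ordinary reachability.
undirected = {v: set(nbrs) for v, nbrs in graph.items()}
for v, nbrs in graph.items():
    for u in nbrs:
        undirected[u].add(v)

print(strongly_connected_components(graph))  # [{1, 2, 3}, {4}]
print(reachable(undirected, 1))              # weak component of 1: {1, 2, 3, 4}
```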

Composite material

guillefix 21st July 2016 at 12:54am

Compressible flow and waves

guillefix 29th January 2016 at 12:57am

Computability theory

guillefix 28th May 2016 at 2:25am

See Theory of computation

A formal language is a set of strings of symbols that may be constrained by rules that are specific to it.

Σ\Sigma^* is the set of strings formed by symbols in the set Σ\Sigma.

From Naïve Set Theory - Cardinality & Basic Computability Theory:

Definition 1.2.1. A one-way infinite, 2-tape Turing Machine is....

A configuration of the Turing machine consists of the state, the contents of the 2 tapes, and the position of the tape heads.

An input string ww is said to be accepted by a Turing machine MM if, the computation of MM with initial configuration having ww on the first tape and both heads at the left end of ww, terminates in qaq_a, the accepting state.

The machine is said to reject the string if the Turing machine terminates in qrq_r, the rejecting state.

(There is of course, the possibility that the Turing Machine may not terminate its execution.)

a Turing machine is said to accept a language L if every string x in the language is accepted by the Turing Machine in the above sense, and no other string is accepted

A language L is said to be decidable if both LL and LcL^c are acceptable.

Definition 1.2.2. A language is said to be acceptable if there is a Turing machine which accepts it.
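
A minimal simulator makes the acceptance/rejection definitions concrete. This sketch is my own: it uses a single tape for brevity (Definition 1.2.1 above uses two), and the transition table is a hypothetical machine accepting binary strings with an even number of 1s:

```python
def run_tm(transitions, tape, q0='q_even', accept='q_a', reject='q_r'):
    """Run a single-tape TM; return True iff it terminates in the accepting state.
    A configuration is (state, tape contents, head position)."""
    tape = list(tape) + ['_']  # '_' marks the blank after the input
    state, head = q0, 0
    while state not in (accept, reject):
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return state == accept

# Hypothetical machine: track the parity of 1s seen, then accept/reject on blank.
T = {
    ('q_even', '0'): ('q_even', '0', 'R'),
    ('q_even', '1'): ('q_odd',  '1', 'R'),
    ('q_odd',  '0'): ('q_odd',  '0', 'R'),
    ('q_odd',  '1'): ('q_even', '1', 'R'),
    ('q_even', '_'): ('q_a',    '_', 'R'),
    ('q_odd',  '_'): ('q_r',    '_', 'R'),
}

print(run_tm(T, '1001'))  # True: two 1s
print(run_tm(T, '1011'))  # False: three 1s
```

This machine always halts; in general a transition table may loop forever, which is exactly the third possibility noted above.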

Definition of computability

Note: f(x)f(x) \uparrow means that the computation of ff on input xx does not halt, i.e. f(x)f(x) is undefined.

Useful definitions: bit-doubling function, pairing function. The pairing function is a prefix code - that is, the encoding of a pair cannot be the prefix of the encoding of another pair. See Prefix code. This makes the code uniquely decodable: a pair can be identified without requiring a special marker between pairs.
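
A sketch of the bit-doubling construction (the exact conventions in the lecture notes may differ): doubling each bit and terminating with '01' makes each block self-delimiting, so a pair can be decoded without any separator between its parts:

```python
def double_bits(x):
    # self-delimiting code: double each bit, then terminate with '01'
    return ''.join(b + b for b in x) + '01'

def encode_pair(x, y):
    # pairing via concatenation of two self-delimiting blocks
    return double_bits(x) + double_bits(y)

def decode_block(s, i):
    # read doubled bits starting at position i until the '01' terminator
    out = ''
    while s[i:i + 2] != '01':
        out += s[i]  # s[i] == s[i+1] by construction
        i += 2
    return out, i + 2

def decode_pair(s):
    x, i = decode_block(s, 0)
    y, _ = decode_block(s, i)
    return x, y

code = encode_pair('101', '0011')
print(code)               # 110011010000111101
print(decode_pair(code))  # ('101', '0011')
```

Since the decoder always stops at the first '01' seen at an even offset, no encoding of one pair can be a proper prefix of the encoding of another: the prefix property referred to above.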

Theorem 1.2.10: A language is computably enumerable if and only if it is acceptable.

Theorem 1.2.11: A language is decidable if and only if it is computably enumerable in increasing order. That is, a language LL is decidable if and only if it is finite or there is a total computable bijection f:NLf: \mathbb{N} \rightarrow L such that for all numbers nn,

f(n)<f(n+1)f(n) < f(n+1)

Theorem 1.2.12. Every infinite computably enumerable set contains an infinite decidable set.

See Computational Complexity problem sheet solutions offline version. Also see these notes on Kolmogorov complexity, for proof of Theorem 1.2.12. and more.

Universality theorem: There is a universal Turing machine.

Kleene's normal form theorem. There is a 3-ary partial computable function CC and a 1-ary partial computable function UU such that any 1-ary partial recursive function can be expressed as

fe(n)=U(μz[C(e,n,z)=0])f_e(n) = U(\mu z[C(e,n,z) = 0])

Theorem 1.2.15 There is a partial computable function that is not total computable.

Halting problem

http://cstheory.stackexchange.com/questions/2853/are-there-any-proofs-the-undecidability-of-the-halting-problem-that-does-not-depe/2911#2911

Computational biology

guillefix 4th March 2016 at 12:27am

https://moleculamaxima.com/

Create new exciting organisms with just a few lines of code Extend nature and develop new drugs with the Synthetic™ bio-programming language and the Cytostudio™ IDE


Looks awesome!


Computational chemistry

guillefix 8th March 2016 at 6:46pm

Computational complexity

guillefix 11th July 2016 at 7:43pm

Algorithmic or computational complexity

The computational complexity of an algorithm is an asymptotic estimate of how the algorithm's running time scales with the size of its input.
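A quick sketch (our own) of what the asymptotic estimate means in practice: counting basic steps, doubling the input size doubles the work of an $O(n)$ loop but quadruples that of an $O(n^2)$ nested loop.

```python
def count_linear(n):
    """A loop doing O(n) work; returns the number of basic steps."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def count_quadratic(n):
    """Nested loops doing O(n^2) work."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps
```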

https://en.wikipedia.org/wiki/Computational_complexity_theory

https://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html

Time complexity

Pseudo-polynomial time: an algorithm runs in pseudo-polynomial time if its running time is polynomial in the numeric value of the input, but exponential in the length of the input – the number of bits required to represent it. That is because a numeric value $n$ is related to its number of bits (binary digits) $b$ by $n \approx 2^b$.
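The classic example is primality testing by trial division, a sketch of which follows (not a recommended algorithm, just an illustration): it takes $O(n)$ arithmetic steps, which looks polynomial, but is $O(2^b)$ in the bit-length $b$ of the input.

```python
def is_prime_trial(n: int) -> bool:
    """Trial division: O(n) steps -- polynomial in the value n,
    exponential in its bit-length b ~ log2(n), i.e. pseudo-polynomial."""
    if n < 2:
        return False
    d = 2
    while d < n:          # up to n - 2 iterations
        if n % d == 0:
            return False
        d += 1
    return True
```

Adding one bit to the input roughly doubles $n$, and hence doubles the running time.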

P vs. NP and the Computational Complexity Zoo

Kolmogorov complexity

See Algorithmic information theory


https://www.wikiwand.com/en/Structural_complexity_theory

Computational mathematics

guillefix 4th April 2016 at 11:36pm

Computer

guillefix 5th July 2016 at 3:33am

Computer aided engineering

guillefix 7th May 2016 at 1:33am

Computer aided engineering (CAE)

Computer algebra

guillefix 12th July 2016 at 6:12pm

https://en.wikipedia.org/wiki/Computer_algebra_system

http://epubs.siam.org/doi/book/10.1137/1.9781611971033

http://homepages.math.uic.edu/~jan/mcs320/

Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer)

Joel Cohen - Computer algebra and symbolic computation books

Intelligent computer algebra system: Myth, fancy or reality?

CASs

Sage/numpy/sympy... Matlab. Mathematica. Maple. Maxima/Macsyma. GAP. Axiom.
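At their core, all of these systems manipulate expression trees rather than numbers. A toy symbolic differentiator (a minimal sketch of the idea, our own expression encoding, nothing like a real CAS):

```python
# Expressions as nested tuples: ('x',), ('const', c), ('+', a, b), ('*', a, b)
def d(expr):
    """Symbolic derivative with respect to x of a tiny expression language."""
    op = expr[0]
    if op == 'x':
        return ('const', 1)
    if op == 'const':
        return ('const', 0)
    if op == '+':      # sum rule
        return ('+', d(expr[1]), d(expr[2]))
    if op == '*':      # product rule
        a, b = expr[1], expr[2]
        return ('+', ('*', d(a), b), ('*', a, d(b)))
    raise ValueError(op)

def ev(expr, x):
    """Evaluate an expression at a numeric value of x."""
    op = expr[0]
    if op == 'x':
        return x
    if op == 'const':
        return expr[1]
    if op == '+':
        return ev(expr[1], x) + ev(expr[2], x)
    if op == '*':
        return ev(expr[1], x) * ev(expr[2], x)

# f(x) = x*x + 3, so f'(x) = 2x
f = ('+', ('*', ('x',), ('x',)), ('const', 3))
```

Real systems like SymPy or Maxima add simplification, many more operations, and far better data structures, but the recursive rewriting idea is the same.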

Web notebook: IPython and Jupyter notebook http://jupyter.readthedocs.org/en/latest/running.html

Torch + IPython = iTorch: https://github.com/facebook/iTorch

Computer algebraic geometry

Computer linear algebra

Basic Linear Algebra Subprograms

http://www.openblas.net/

LAPACK 
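For reference, the central BLAS Level-3 operation (GEMM) computes $C \leftarrow \alpha A B + \beta C$. A naive pure-Python sketch of the semantics (libraries like OpenBLAS implement the same contract with blocking and SIMD for speed):

```python
def gemm(alpha, A, B, beta, C):
    """Naive reference for the BLAS Level-3 GEMM: alpha*A@B + beta*C,
    with matrices as lists of rows."""
    m, k, n = len(A), len(B), len(B[0])
    assert all(len(row) == k for row in A)
    return [[alpha * sum(A[i][p] * B[p][j] for p in range(k)) + beta * C[i][j]
             for j in range(n)] for i in range(m)]
```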

Other computer algebras

Computer algebra system

guillefix 20th June 2016 at 5:44pm

Computer architecture

guillefix 29th June 2016 at 3:37am

Computer engineering

guillefix 29th June 2016 at 3:37pm

Computer graphics

guillefix 17th July 2016 at 11:23pm

Computer hardware

guillefix 3rd April 2016 at 2:24pm

GPUs

For Deep learning for example

Best GeForce GPU: GeForce Titan X. Titan Z coming soon

More CUDA GPUs

ThinkMate computer with many GPU customization options.

Computer networks

guillefix 8th May 2016 at 10:19pm

Computer science

guillefix 14th July 2016 at 2:55pm

Computer science can refer broadly to Computer Science and IT, or more specifically to Theoretical computer science

Computer Science and IT

guillefix 3rd July 2016 at 4:58am

Computer science and Information technology (IT).

Computer science is what came out of asking: what kind of maths can actually be effectively carried out in the physical world?

Portal:Computer science

Portal:Information technology

Information technology is the result of actually carrying out this math, a step that required technology.


http://colorfulengineering.org/SCICOMP.html

http://www.aduni.org/courses/

Nice Math ∩ Programming blog: https://jeremykun.com/

http://it-ebooks.org/

http://en.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/

Quantum random number generator: https://qrng.anu.edu.au/

Github

Computer vision

guillefix 21st June 2016 at 4:16pm

Concentration around a self-diffusiophoretic particle

guillefix 9th June 2016 at 6:23pm

See Self-diffusiophoresis.

At steady-state, in the reference frame of the object, and neglecting distortions induced by the flow (small Peclet number), the solute concentration in the liquid is given by

$D\nabla^2 c = 0$ (steady state diffusion)

$-D\mathbf{n}\cdot\nabla c(\mathbf{r}_s) = \alpha(\mathbf{r}_s)$

i.e. the flux of solute at the surface of the colloid is given by a space-dependent function $\alpha$ that measures the 'surface activity', i.e. the generation or consumption of solute by a chemical reaction. (In general, describing this process involves additional coupled transport problems for other species involved in the surface reactions.) Some variations are needed for the cases of Self-electrophoresis and Self-thermophoresis. Approximately, these equations give $V \sim \alpha \mu / D$. In particular, once the surface properties $\mu$ and $\alpha$ and the shape are given, the velocity turns out to be independent of the size $R$ of the object, showing that this method of propulsion is robust under downscaling.
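A rough dimensional-analysis sketch (our own, under the small-Péclet assumptions above) of why the size $R$ drops out:

```latex
\delta c \sim \frac{\alpha R}{D}
\;\Rightarrow\;
\nabla_{\parallel} c \sim \frac{\delta c}{R} \sim \frac{\alpha}{D},
\qquad
V \sim v_s = \mu\, \nabla_{\parallel} c \sim \frac{\alpha \mu}{D}
```

The factor of $R$ in the magnitude of the concentration perturbation cancels against the $1/R$ from taking a gradient over the particle surface, so the slip velocity, and hence the swimming speed, is independent of particle size.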

Concept

guillefix 8th July 2016 at 2:37am

A certain structure in the Mind that represents a Set of Objects, often by representing a property that defines the set.

http://plato.stanford.edu/entries/concepts/

Classical theory

Prototype theory

Theory theory

Concrete mathematics

guillefix 27th June 2016 at 10:46pm

https://en.wikipedia.org/wiki/Concrete_Mathematics. The topics in Concrete Mathematics are "a blend of CONtinuous and disCRETE mathematics." The term "concrete mathematics" also denotes a complement to "abstract mathematics". (by Donald Knuth, author of TeX!!)

See AugMath.

These ideas of my mathematical philosophy are also brought to life in Iconic mathematics (maths that looks like what it means):

Symbols ask us to think. Icons ask us to look.

The symbol 5 tells us nothing about five. The icon ||||| is five.

More: http://www.wbricken.com/htmls/03words/0303ed/030304iconic.html.

Another keyword: experiential mathematics. A lot of its literature is applied to education, and stays at a very shallow level of the idea.

See voxel.css in css part in Frontend web development.

Synthetic mathematics? http://math.andrej.com/wp-content/uploads/2007/05/syncomp-mfps23.pdf


More visual & concrete mathematics

https://acko.net/

"Semi-concrete": http://cognitivemedium.com/emm/emm.html

Bret Victor

Introducing Guesstimate, a Spreadsheet for Things That Aren’t Certain. Visual arithmetic on probability distributions!

http://ncase.me/ Explorable explanations!: http://explorableexplanations.com/

https://www.quantamagazine.org/20160531-set-proof-stuns-mathematicians/

Concurrent computing

guillefix 30th June 2016 at 2:01am

Concurrent programming

guillefix 21st July 2016 at 3:12am

Condensed matter physics

guillefix 22nd June 2016 at 4:53am

Condensed matter physics is the Physics of condensed matter. Below we look at the different broad types of condensed matter. The properties of condensed matter systems depend, among other things, on the chemical composition of the system (see Chemistry), and the physical laws the chemical components obey.

Condensed vs non-condensed

Condensed matter refers to Bulk matter in a condensed form, i.e. one that is composed of condensed phases, mainly solids and liquids. More generally, a condensed phase is one in which the particles adhere to each other strongly enough (by, for example, Intermolecular forces or Chemical bonds), relative to their kinetic energy, that the system remains approximately bound in the absence of external forces; or one in which the particles are so highly concentrated that they interact strongly (for example, non-attracting particles can be forced to condense by confining them in a small volume, or by some external force like gravity, forcing them to be "nearly touching", as in a liquid or solid).

Non-condensed matter has constituents that are barely bound together, if at all, and thus often need to be confined, either naturally, or artificially, to be studied as a whole. The main types are: gases (see Fluid mechanics), and plasmas.

Solid vs fluid

A solid is a form of matter that can resist a considerable amount of stress without flowing (so that its only response is elastic).

A fluid is a form of matter that flows under virtually any amount of stress.

A viscoelastic material displays solid-like elasticity on short time scales, and fluid-like viscosity on long time scales.

There is really a continuum between these. For instance some Rubbers are closer to solids, while others are more clearly viscoelastic, depending on the ratio of elastic to viscous deformation.
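This solid/fluid continuum can be made quantitative with the simplest viscoelastic model, the Maxwell model: a spring (modulus $E$) and a dashpot (viscosity $\eta$) in series. A standard textbook construction, sketched here as an illustration:

```latex
\dot{\varepsilon} = \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta},
\qquad
\tau = \frac{\eta}{E}
```

Under a step strain the stress relaxes as $\sigma(t) = \sigma_0 e^{-t/\tau}$: on time scales short compared to the relaxation time $\tau$ the material responds elastically (solid-like), while on long time scales it flows (fluid-like).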

Hard vs soft

Solid-state physics studies matter in hard form. Hard forms are characterized by strong inter-particle bonding (often Chemical bonds, when at room temperature). This bonding is strong enough that it makes the relative position of the particles essentially fixed, with thermal fluctuations making particles vibrate only a bit relative to this fixed position. It is also strong enough to resist relatively large external stresses (i.e. it doesn't flow). All forms of hard matter are solids.

Soft matter physics studies matter in other condensed forms (soft forms), where some or all (relative) positional degrees of freedom are "soft", that is, strongly affected by thermal fluctuations, so that they have large variances. It also includes forms with bonding so weak that the material can barely resist any external stress without flowing. Soft matter can be a solid or a fluid.

Note that given the definitions above, one expects a spectrum between the two types of matter, as the definitions involve quantities that can take a continuum of values. Most materials in nature, however, can be classified as one or the other.

One of the most important properties of materials is that they exhibit different phases. These are understood through the study of Phase transitions. See Chaikin and Lubensky's book Principles of condensed matter physics.

Condensed forms of matter

Hard forms

Soft forms

Non-condensed forms of matter

There are also phases of matter that exhibit quantum effects. These are studied (along with other non-quantum phases that nonetheless can be understood using quantum mechanics) in Quantum condensed matter physics

Order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system. Disordered systems See here and here, and here

  • Annealed disorder: disorder of a system at thermal equilibrium.
  • Quenched disorder: disorder of a system in which some variable is out of equilibrium. For instance in glasses.

Physics of disordered systems

Discussion Meeting: Nonlinear Physics of Disordered Systems: From Amorphous Solids to Complex Flows

See Materials science for the applications of the principles of condensed matter physics to understanding and use of the wealth of materials in the world, both natural, and artificial.

For the study of the physics and chemistry at the interface between two phases, see Surface science.

CHANDRASEKHAR LECTURE SERIES

Strongly correlated systems: From models to materials

CMP YB channel

Conditional entropy

guillefix 3rd July 2016 at 2:12pm

In Information theory, the conditional entropy of a Random variable $Y$, conditioning on another random variable $X$, is the average entropy of $Y$ conditional on the value of $X$:

$H(Y|X) = -\sum_{x,y} p(x,y) \log{p(y|x)}$

Conditional entropy video

Some results:

$H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)$

Where we use the Entropy and Joint entropy of the random variables.
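These definitions can be checked numerically. A small sketch (our own toy joint distribution) computing $H(Y|X)$ directly from its definition and verifying the chain rule:

```python
from math import log2

def H(p):
    """Shannon entropy of a distribution given as a list of probabilities."""
    return -sum(q * log2(q) for q in p if q > 0)

# a toy joint distribution p(x, y)
pxy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

# marginal p(x)
px = {x: sum(p for (a, _), p in pxy.items() if a == x) for x in (0, 1)}

# H(Y|X) = -sum_{x,y} p(x,y) log p(y|x), with p(y|x) = p(x,y)/p(x)
H_Y_given_X = -sum(p * log2(p / px[x]) for (x, _y), p in pxy.items() if p > 0)
H_XY = H(list(pxy.values()))
H_X = H(list(px.values()))
# chain rule: H(X,Y) = H(X) + H(Y|X)
```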

Proof

Conditional mutual information

guillefix 3rd July 2016 at 2:30pm

conditional_kolmogorov.png

guillefix 14th April 2016 at 10:58am

Conflict

guillefix 12th July 2016 at 12:39am

Conformal field theory

guillefix 11th June 2016 at 2:37pm

See MMathPhys course, and Critical phenomena. lecture notes

Field theory with conformal invariance.

Conformal invariance seems to be a generic feature of critical phenomena, although this is not yet completely understood. Scale and conformal invariance in quantum field theory

Constitutive equations

guillefix 1st May 2016 at 8:42pm

(wiki) In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually as applied fields or forces.

They are often just phenomenological, because bulk material, or a sufficiently large amount of condensed matter, is a very complex system, made of many interacting particles. However, they should be, in principle, and sometimes are in practice, derivable from principles of Statistical physics, and often Non-equilibrium statistical physics.

Those constitutive relations that are used in the description of the autonomous time-evolution of a system often need Non-equilibrium statistical physics, as systems whose macroscopic (i.e. relevant averaged) quantities evolve in time are by definition out of equilibrium.

Constitutive relations for driven systems, that are in quasi-equilibrium, should be derivable from Equilibrium statistical physics.

Constitutive equations in non-equilibrium

Kinetic theory offers a foundation to derive constitutive equations from the microscopic details of the material. However, derivations are often hard, and give only qualitatively correct answers (more precisely, the answers are often correct up to an order $1$ constant, because of approximations).

Non-equilibrium thermodynamics is often based itself on more or less phenomenological principles. However, these principles can be very useful for deriving constitutive relations for large classes of systems.

An example of one of these principles is the principle that the rate of entropy production be maximal. This is used in this paper to derive the Allen-Cahn equations used to describe the evolution of phase fields (see Phase transition).

See On thermomechanical restrictions of continua for the paper proposing the above principle.

I'm sure there are other approaches, and I should learn more about Non-equilibrium statistical physics in general, to learn, and organize these important ideas better.

Constrained path integrals

guillefix 23rd January 2016 at 2:24am

See here (page 11) and in Notes on Nonequilibrium StatPhys MT2015 Oxford (mostly stochastic processes) (page 42). Generalization of Lagrangian multipliers in finite optimization problems.

Contents

guillefix 22nd June 2016 at 4:51pm

Continuity of percolation phase transition

guillefix 11th June 2016 at 5:55pm

See Percolation theory

Continuity of the order parameter $P_\infty(p)$ (the probability that an occupied site is in the infinite cluster for a given occupancy $p$) at $p_c$ is an open mathematical problem in the general case, but it is known to hold rigorously in 2D and in $d \geq 19$ using lace expansion methods (Mean-Field Behaviour and the Lace Expansion). The conjecture that $P_\infty(p_c) = 0$ for $3 \leq d \leq 18$ remains however one of the open problems in the field (V. Beffara, V. Sidoravicius, Percolation Theory).
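The order parameter can be explored numerically. A minimal Monte Carlo sketch (our own, not from the source) of site percolation on a finite grid, using union-find to test whether an occupied cluster spans from top to bottom:

```python
import random

def percolates(L, p, rng):
    """Site percolation on an L x L grid: does an occupied cluster
    connect the top row to the bottom row?"""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))        # plus two virtual sites
    TOP, BOT = L * L, L * L + 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            s = i * L + j
            if i == 0:
                union(s, TOP)
            if i == L - 1:
                union(s, BOT)
            if i > 0 and occ[i - 1][j]:    # join occupied neighbour above
                union(s, (i - 1) * L + j)
            if j > 0 and occ[i][j - 1]:    # join occupied neighbour left
                union(s, i * L + j - 1)
    return find(TOP) == find(BOT)
```

Estimating the spanning probability as a function of $p$ on growing grids is the standard way to locate $p_c$ numerically (about 0.5927 for the square lattice), though it says nothing rigorous about continuity at $p_c$.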

Continuous dynamical system

guillefix 8th July 2016 at 5:30pm

See Nonlinear system and Nonlinear continuous dynamical system, as most interesting dynamical systems are nonlinear.

A continuous dynamical system often refers to a dynamical system evolving in continuous time on a topological space; such a system is typically described by a system of Ordinary differential equations.

Hyperbolic fixed point

Continuous function

guillefix 24th July 2016 at 12:57am

A Function between two Topological spaces $f: (X, \tau) \rightarrow (Y, \tau')$ is continuous if, for all $O \in \tau'$, $f^{-1}(O) \in \tau$,

where $f^{-1}(O)$ is the Preimage of the set $O$.
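For finite topologies this definition can be checked exhaustively. A small sketch (our own encoding: topologies as sets of frozensets, functions as dicts):

```python
def preimage(f, O):
    """Preimage of the set O under the function f (given as a dict)."""
    return frozenset(x for x in f if f[x] in O)

def is_continuous(f, tau_X, tau_Y):
    """f: (X, tau_X) -> (Y, tau_Y) is continuous iff the preimage of
    every open set of Y is open in X."""
    return all(preimage(f, O) in tau_X for O in tau_Y)

# Sierpinski space on {0, 1}: open sets are {}, {1}, {0, 1}
tau = {frozenset(), frozenset({1}), frozenset({0, 1})}
ident = {0: 0, 1: 1}   # continuous: preimages are the open sets themselves
swap = {0: 1, 1: 0}    # not continuous: preimage of {1} is {0}, not open
```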

Continuous mathematics

guillefix 29th March 2016 at 3:33pm

Analysis offers the foundations of continuous mathematics

continuous_vs_discontinuous_bifurcations.png

guillefix 14th March 2016 at 6:12pm

Continuum limit of percolation models

guillefix 11th June 2016 at 3:00pm

See Percolation theory

The continuum limit, if it is defined, is often a field theory. In particular, at the critical point, it is often a Conformal field theory, as percolation models at the critical point are found to have conformal symmetry.

John Cardy used this idea to find crossing probabilities between the opposite sides of a conformal rectangle filled with a conformally invariant infinitesimal lattice: Critical Percolation in Finite Geometries. Smirnov rigorously proved that Cardy’s conjecture holds for the continuum limit of site percolation on a triangular lattice: Critical percolation in the plane: conformal invariance, Cardy's formula, scaling limits.

Defining the continuum limit is tricky. See Correlation Functions in Two-Dimensional Critical Systems with Conformal Symmetry.

Only certain CFTs, usually the minimal models, have been observed to possess the right structure to describe a critical lattice model in two dimensions. Due to the relatively few number of such theories, models with the same macroscopic but different microscopic properties are presumed to have identical continuum limits which correspond to the same CFT characterized by the value of the central charge c. This is a restatement of the notion of universality

Renormalization group

A relatively new method to describe the continuum limit of the critical lattice models is Schramm–Loewner evolution

Continuum mechanics

guillefix 11th May 2016 at 2:18pm

The Mechanics of most classical types of Bulk matter can be macroscopically described via continuum mechanics, which describes matter in terms of continuum equations, based on space-time varying fields that evolve according to Differential equations.

See Rheology, for the study of flow in particular.

Deformation

Control theory

guillefix 24th June 2016 at 1:04am

Control theory and control systems

guillefix 24th June 2016 at 1:24am

Why learn control theory

Signal processing

Control theory

Control systems

Normal control systems are usually classified as linear systems and nonlinear systems.

Switched systems

A switched system consists of continuous-time/discrete-time dynamical subsystems and a rule (supervisor) that determines the switching among them.

Control techniques based on switching among different controllers have been applied extensively in recent years. Indeed, a switched controller can provide a performance improvement over a fixed controller.

Switched systems consist of a decision layer and a control layer. The former is logical, i.e., discrete, and decides at a given time, which subsystem is activated. The latter usually corresponds to a set of normal control systems.

Periodicity and chaos from switched flow systems: Contrasting examples of discretely controlled continuous systems

Controllability and reachability criteria for switched linear systems

Switching in Systems and Control

On partitioned controllability of switched linear systems

Finite automata approach to observability of switched Boolean control networks. Boolean control networks I think are Boolean networks with an external control.

It is pointed out that “One of the major goals of Systems biology is to develop a control theory for complex biological systems” [14]

See also Robotics

Conversation with Chico Camargo on GP map bias - 22/4/2016

guillefix 18th May 2016 at 5:51pm

Chico Camargo Hey man! I hadn't seen the recording, that's cool! I'll send you the slides via email. What diagrams do you want, just to make sure I send you the right thing? Guillermo Valle Pérez 4/22, 12:56pm Guillermo Valle Pérez Well the ones where you show the tree of binary states evolving to other states, and the ones showing the complexity vs frequency for example Chico Camargo 4/22, 12:59pm Chico Camargo Here's the whole thing - https://docs.google.com/presentation/d/1l-IgqXy1ZdBn__aBQX0fUH8Z2y6iuogwt753omsiyAw/edit?usp=sharing

29-06-2015 - Evolution 2015 Guaruja What Darwin didn't know: natural variation is structured Chico Camargo University of Oxford Evolution 2015 Guarujá, Brazil docs.google.com Chico Camargo 4/22, 1:01pm Chico Camargo Just one thing - recently I've changed my definition of phenotype to something more coarse-grained, so the plots for complexity have changed. But they're fine, the new ones say the same as the old ones. The robustness things, however, don't apply so directly to the new phenotype definition I've been exploring, so I would not include that part about robustness. All the rest is fine! Chico Camargo 4/22, 1:03pm Chico Camargo Finally, an interesting paper, in case you haven't seen it: http://rsif.royalsocietypublishing.org/content/royinterface/12/113/20150724.full.pdf Chico Camargo 4/22, 1:03pm Chico Camargo They say some cool stuff there, like "The properties of genotype –phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map." Guillermo Valle Pérez 4/22, 2:53pm Guillermo Valle Pérez Yeah i've seen that paper. I've found a way to predict more or less the number of sequences that map to the most frequent sequences, in average over the ensemble of transducers. It's only approximate, but it is about thinking about certain kinds of cycles, and how simpler cycles in the transducer are more probable, so sounds similar to your boolean network cycles thing. 
It's also related to the idea about constrained and unconstrained parts, which I think is the most fundamental. The idea for transducers is that there are states that give the same output independent of the output (so they are unconstrained). Outputs that admit cycling through these states have most of the input bits unconstrained. Then if one looks at what kinds of cycles there are, one sees that the most probable are the simplest ones, and these correspond to simple outputs Guillermo Valle Pérez 4/22, 2:59pm Guillermo Valle Pérez It's not totally rigorous, but estimating the probabilities of these cycles roughly gives the right frequency of the most probable strings like 11111111.. 011111111, 101010101... etc Chico Camargo 4/22, 3:02pm Chico Camargo That is very interesting! On the boolean networks it turns out that the probability of a cyclic output is an exponential with the cycle length, but the complexity bias exists even for cycles of the same length I still don't understand how the GP map works though, maybe because I don't fully understand what a transducer really is. What is the genotype and the phenotype (and the mapping), in their case? Guillermo Valle Pérez 4/22, 3:04pm Guillermo Valle Pérez grrwhen i said above "same output independent of the output" i meant "same output independent of the input"... Chico Camargo 4/22, 3:04pm Chico Camargo Oh yeah I god that grin emoticon So, I understand that a finite state transducer is like a finite automaton, but with two tapes: an input tape and an output tape reads a tape and writes another Guillermo Valle Pérez 4/22, 3:05pm Guillermo Valle Pérez http://galaxy.eti.pg.gda.pl/katedry/kiw/pracownicy/Jan.Daciuk/personal/thesis/img74.gif

Guillermo Valle Pérez 4/22, 3:07pm Guillermo Valle Pérez its a finite state machine. You start at a certain state and move to the state according to the symbol you read and following the transition according to the first symbol in "x/y". When you follow that transition you print a "y" oh sorry in that picture i showed you the x and y are swapped relative to my convention that picture isnt ver good wait Guillermo Valle Pérez 4/22, 3:09pm Guillermo Valle Pérez Guillermo Valle Pérez 4/22, 3:09pm Guillermo Valle Pérez thats one of the ones generated by my actual code sideways lol Chico Camargo 4/22, 3:09pm Chico Camargo Beautiful! Ok, it's a finite state machine with two tapes, rather than just traversing (and accepting or rejecting) an input string, it translates an input string to an output string Guillermo Valle Pérez 4/22, 3:09pm Guillermo Valle Pérez so you begin at state 0 and if you see a 0 you go to state 0 printing a 0, and if you see an 1 you go to state 1 also printing a 0 Chico Camargo 4/22, 3:09pm Chico Camargo Cool So you randomly generate a finite state transducer, and see how many input words give you each output word? Guillermo Valle Pérez 4/22, 3:10pm Guillermo Valle Pérez Yep Chico Camargo 4/22, 3:11pm Chico Camargo And the trends are the same even if you generate a lot of those transducers at random? Guillermo Valle Pérez 4/22, 3:11pm Guillermo Valle Pérez yeah the more you generate the more the graph seems to be a linear thing with a given spread in the frequency-vs-complexity, like the one i posted technically you could enumerate all transducers of a given number of states Chico Camargo 4/22, 3:13pm Chico Camargo So the trend is there even if you have a single transducer, but it's more obvious if you plot the results for a lot of them, is that what you're saying? I have more questions: How long are the input strings? How long are the output strings? You said you can enumerate them. How is the transducer represented? 
I was reading some stuff about formal language theory last night, and it relates so much to that, you have no idea Guillermo Valle Pérez 4/22, 3:14pm Guillermo Valle Pérez Well, the trend is mostly visible if you plot a lot of them. For a single one I tend to find quite some noise. The input strings ive tried are 9-15 bits long, but they can be anything I've made it so that the output strings are the same length as the input. I could make the variable length by adding an "empty" symbol as a possibility but havent tried that The transducers are represented as strings too I think, but I'm using a finite automaton generator, not generating them on my own because it's not that trivial to generate them really uniformly apparently. I think it's because many automatons would be equivalent, and it only generates distinct ones.. Chico Camargo 4/22, 3:17pm Chico Camargo I see Guillermo Valle Pérez 4/22, 3:17pm Guillermo Valle Pérez Tho I'm not sure how it's generating them under the hood tbh, atm Chico Camargo 4/22, 3:17pm Chico Camargo Sure Guillermo Valle Pérez 4/22, 3:18pm Guillermo Valle Pérez and i dont think the answer should be too different if you generated them in a more naive way Chico Camargo 4/22, 3:18pm Chico Camargo I agree with you One thing I'm trying to understand is what space is being mapped to what space But I'm slowly getting it Any string to any string. (well, binary strings in both alphabets, in this case) Guillermo Valle Pérez 4/22, 3:21pm Guillermo Valle Pérez yeah you can choose any alphabet. But i chose binary. and in my case its any binary string to given length to the same set well no there are some strings you can't get in the output so the output space is some subset of {1,0}^* {1,0}^n, n fixed i mean Chico Camargo 4/22, 3:23pm Chico Camargo And there are binary strings that can't be generated by that transducer. 
Sure Guillermo Valle Pérez 4/22, 3:23pm Guillermo Valle Pérez Yeah quite a few actually which make sense Chico Camargo 4/22, 3:24pm Chico Camargo It does. Guillermo Valle Pérez 4/22, 3:24pm Guillermo Valle Pérez becuase if there is redundancy, the phenotype space must be smaller, for a deterministic map smaller than genotype space Chico Camargo 4/22, 3:25pm Chico Camargo And because each transducer will in fact produce strings of a given shape, like "0 1^n 0 1 0^m 1" And when you choose the number of states in your transducer.. I would imagine I imagine very large transducers would be unnecessarily complex Guillermo Valle Pérez 4/22, 3:27pm Guillermo Valle Pérez well i choose a small number of states, like 5 so that it's simple Chico Camargo 4/22, 3:28pm Chico Camargo Yeap Guillermo Valle Pérez 4/22, 3:28pm Guillermo Valle Pérez I've tried more states and results are not too different, but I worry that I am taking a sample that is much smaller than all possible trandsucers of that size With smaller number of states like 2 or 3, maps seem too trivial also Chico Camargo 4/22, 3:30pm Chico Camargo I'd expect that the complexity bias would become too messy if the transducers were too large: you'd be using a very complex algorithm to map input to output, introducing a lot of complexity into the business I find it interesting that when you sample different transducers, you're sampling different GP maps. ...which is something that can evolve, as well. Just like you can change the parameters of an ODE instead of changing its initial conditions, you can change the GP map instead of its I/O Guillermo Valle Pérez 4/22, 3:32pm Guillermo Valle Pérez what i can't quite figure is how to relate these results more directly to other results like that of the boolean network or polyominoes. Yeah, in principles these things should map to a transducer, but how simple a transducer, and do they have some features that simply the {ensemble of all transducers} does not capture. 
I mean this is precisely the same problems with choosing random network null models in network theory.. Guillermo Valle Pérez 4/22, 3:32pm Guillermo Valle Pérez Yeah i also expect more noise for more states.. Chico Camargo 4/22, 3:33pm Chico Camargo So, normally a boolean network is the genotype, so your input string in this case. Same for an RNA sequence. Your transducer would be the actual map Guillermo Valle Pérez 4/22, 3:35pm Guillermo Valle Pérez "which is something that can evolve". Yeah the whole reason I did was just in the spirit of null models: see if one expects these features just looking at random simple maps, without any other constraint. But I also thought about, why choose the transducers uniformly at random, why not sample them according to the biased output of another transducer, that will produce simpler transducers more often. One can imagine a potentially infinite chain of GP maps determining GP maps, and it'd be interesting to see what one gets.. Guillermo Valle Pérez 4/22, 3:35pm Guillermo Valle Pérez Yeah the whole reason I did -> Yeah the whole reason I did this Chico Camargo 4/22, 3:36pm Chico Camargo I agree with the spirit of null models: that is totally the point Hoho, I know what you mean! 
In fact I think there is something else to it Guillermo Valle Pérez 4/22, 3:37pm Guillermo Valle Pérez Yeah this looks like whats called hyperparameter optimization in machine learning: when you optimize your machine learning model itself Or genetic programming with evolving GP maps, which has also been tried Another way to do this would be to make a transducer whose output changes the transducer itself, and see how that evolves which tbh sounds like the whole idea of genetic regulatory networks where the phenotype (proteins) in some sense change the GP map (genes->proteins) I guess when one does this one can then still define a meta GP map like what you do in the boolean networks Chico Camargo 4/22, 3:41pm Chico Camargo I think so Coupling GP maps is an interesting idea But you've gotta play with the timescales that that involves For example - also on that potentially infinite chain of GP maps you mentioned: Chico Camargo 4/22, 3:42pm Chico Camargo So, this chain of GP maps determining GP maps is sequential: once, in the history of life, life "chose" a set of basepairs, A-T, C-G. And it's been working pretty much with all that. And by "choosing" I mean that its rate of change slowed down. It could be from reaching a fitness peak, local or not, but the fact is that it slowed down. Then, at some point, life "chose" a genetic code: the way codons map to aminoacids. Once that choice was frozen, life has been working with it ever since. Then it chose some protein families. It chose chromosomes. Yada, yada, yada: (pretty much) frozen choices allowing more complex forms to emerge. And you could argue that the genetic code and these other things are still changing, but they're just changing very slowly, while other things change more quickly Guillermo Valle Pérez 4/22, 3:43pm Guillermo Valle Pérez Hm, I see what you mean Chico Camargo 4/22, 3:43pm Chico Camargo In a similar fashion, I see that with language. 
We aren't really changing our alphabet, or our grammar structures anymore, it seems like those evolved once and stopped, but they're just changing slowly. On the other hand, new words still appear all the time I think it makes total sense to get the transducer from a set of transducers - but if you're picking a simple transducer, you're probably already doing that Guillermo Valle Pérez 4/22, 3:44pm Guillermo Valle Pérez well im picking simple ones in the sens of small number of states Chico Camargo 4/22, 3:45pm Chico Camargo If the transducer can really be represented as a string, then I'd be sure of that Guillermo Valle Pérez 4/22, 3:45pm Guillermo Valle Pérez but i haven tried generating transducers from a transducer yet but in your example above it seems like it would like generating random transducers and then fixing to one. Then using that fixed transducers as maybe a building block out of which new meta transducers can be built... tho im not sure i understand where GP maps fit in all the biological examples you mention above Chico Camargo 4/22, 3:49pm Chico Camargo a GP map is a translation, an I/O machine, a transducer. Something that converts information of a kind into information of another kind Guillermo Valle Pérez 4/22, 3:50pm Guillermo Valle Pérez first the atcg is an alphabet, not a GP map right? Then it evolved the codon-aminoacid, which i see its a GP map. What is the protein family, and what do the chromosomes have to do with a GP map? i mean I would understand that gene-> protein is a GP map. Then protein->some cellular phenotype is another one.. Chico Camargo 4/22, 3:52pm Chico Camargo Ok, point taken, the ATCG is not a GP map. It is an alphabet. Let me put it this way: Chico Camargo 4/22, 3:59pm Chico Camargo Nature chooses a way store information, then it pretty much settles for that one according to some criteria like thermodynamical stability and to how much information you can store with that system - for instance, ATCG basepairs. 
Or, another way to store information, aminoacids. So now we have two alphabets, one with four letters, one with ~20. Then, once those had been pretty much chosen, eventually nature chose/found a way to translate between them. Or maybe it found the latter alphabet as an outcome of finding the GP map that converts information stored in DNA sequences to information stored in aminoacid sequences. But anyway, it chose the alphabets, then it chose the GP map. Focusing on the GP maps: DNA-> Proteins, Protein shape-> Protein function in the cell, gene networks -> cellular phenotype, cell type composition -> tissue structure/function/identity, whatever mapping from one kind of information to another (but just mappings, so nothing about the chromosomes I had mentioned). My point is that I think often nature tries many "transducers", many I/O machines, and eventually it chooses some of them, and builds on top of them. So the I/O alphabets and GP maps are conserved along evolution. In this sense, humans probably use the same cell types as other apes. And we all use the same body plans as other mammals. And the same embryonic development genes as worms. Etc etc downards, ad infinitum. Chico Camargo 4/22, 4:01pm Chico Camargo I'm saying this because some structures evolve quickly and others don't: in the hierarchy of which genes regulate which other ones, the further up a gene is placed, the less it changes over time: the more conserved it is. And I think that makes total sense, considering that it is part of a GP map that was "chosen" long ago Guillermo Valle Pérez 4/22, 4:02pm Guillermo Valle Pérez and because many things depend on it, it's hard to change right? 
Chico Camargo 4/22, 4:02pm Chico Camargo that too in theory you could change it, but today that'd mean a drastic reduction on fitness it'd be like trying to reinvent the genetic code: it won't work, life relies too much on that Guillermo Valle Pérez 4/22, 4:03pm Guillermo Valle Pérez that's what I mean, unless you change many things along with it, in just the right ways.. which is highly unlikely Chico Camargo 4/22, 4:03pm Chico Camargo Exactly That'd be like changing the English grammar, or semantics On the other hand, if the evolutionary innovation is pretty fresh, there's probably not much relying on it, so it's ok to break it Guillermo Valle Pérez 4/22, 4:06pm Guillermo Valle Pérez This is just why it's so hard to say switch from qwerty to dvorak keyboards, it's changing your whole word-hand movement GP map, on which your whole internet life depends Chico Camargo 4/22, 4:06pm Chico Camargo haha yeah! So, I think the easiest story you can tell is that a transducer is a very simple a GP map, without all the biological details. Which features did you say are not captured by the transducers? Guillermo Valle Pérez 4/22, 4:09pm Guillermo Valle Pérez Well in theory all GP maps should potentially be expressed as transducers, though probably of many more states. Having 5 states is like considering the set of sufficiently coarse-grained biological models I suppose.. Chico Camargo 4/22, 4:11pm Chico Camargo Hm, there is one thing I still don't see What you said resonates very well with the ideas in that paper I sent you: that all these properties come from the sequence nature of genotypes and phenotypes or genotypes at least. and sequences = I/O strings, great Guillermo Valle Pérez 4/22, 4:13pm Guillermo Valle Pérez If you consider any number of states, transducers pretty much include everything else.. But I'm only considering simple ones. 
A simple transducer can either be justified as some process during the earliest stages of evolution of some form of life (natural or artificial) where the system itself is actually simple. Say a few dots in game of life, or a few molecules. Then the justification to apply to more complex life is probably the same as why we use coarse-grained models: yeah life is full of intricate details, but it is organized in such a way that is approximately simple. I think this would like saying complex life is really behainv as a transducer with many many states, but this transducer is coarse-grainable to a transducer of few states. This actually agrees nicely with the idea that life current GP maps were in some way determined by previous GP maps, and thus are expected to be simpler than just a random GP map from ATCG to tissue... Chico Camargo 4/22, 4:14pm Chico Camargo Yeah I'm happy with that there is only one thing that I still fail to agree/understand A transducer translates input to output by treating the input as an (ordered) string: it first reads the first character, then the second, then the third And even though the map in the end is from string A to string B, it's calculated from this step-by-step reading Guillermo Valle Pérez 4/22, 4:15pm Guillermo Valle Pérez that's how it "mechanically" works yeah Chico Camargo 4/22, 4:16pm Chico Camargo Yeah But for example, a gene network. The network, really. It can be written as a string, but you can also do any permutations on the gene order, and that'd give you a different string. The GP map for gene networks is also from string to string, but it doesn't rely on reading anything step by step and it's harder for me to talk about "what parts of the string are unconstrained", for instance Guillermo Valle Pérez 4/22, 4:17pm Guillermo Valle Pérez "written as a string, but you can also do any permutations on the gene order, and that'd give you a different string." but it'd give you the same network, you mean? 
Chico Camargo 4/22, 4:17pm Chico Camargo It'd give you a network that is isomorphic to it (like, B->A instead of A->B). That could possibly give you the same phenotype or an isomorphic phenotype. Hm Guillermo Valle Pérez 4/22, 4:18pm Guillermo Valle Pérez but i mean, if you have some map from finite strings to finite strings, it is always in principle writeable as a finite transducer Chico Camargo 4/22, 4:18pm Chico Camargo Oh. That's true. Guillermo Valle Pérez 4/22, 4:18pm Guillermo Valle Pérez the transducer may be quite large though Chico Camargo 4/22, 4:18pm Chico Camargo There's a theorem that does that, right? that says that shit. That's awesome. Guillermo Valle Pérez 4/22, 4:19pm Guillermo Valle Pérez yeah... I mean it's kind of floating around the results of turing and church and co. I think A Turing machin is just a kind of finite transducer with infinite memory so like infinite number of states.. Chico Camargo 4/22, 4:20pm Chico Camargo yeah yeah Guillermo Valle Pérez 4/22, 4:20pm Guillermo Valle Pérez But the thing is that a map between two finite sets can always be expressible with finite memory Chico Camargo 4/22, 4:20pm Chico Camargo no, there is, I'm sure I read this theorem yesterday, it's all coming back Guillermo Valle Pérez 4/22, 4:20pm Guillermo Valle Pérez ah cool Chico Camargo 4/22, 4:21pm Chico Camargo What I actually read: Guillermo Valle Pérez 4/22, 4:21pm Guillermo Valle Pérez You can make your own! Map any finite set of inputs to any output: http://examples.mikemccandless.com/fst.py?terms=pepe%2F33%0D%0Amoth%2F1%0D%0Apop%2F2%0D%0Astar%2F3%0D%0Astop%2F4%0D%0Atop%2F5%0D%0A&cmd=Build+it! examples.mikemccandless.com examples.mikemccandless.com Chico Camargo 4/22, 4:23pm Chico Camargo There is a correspondence between formal grammars (sets of strings) and automata (that might accept or reject a string, saying that it does or doesn't belong in that grammar). 
Finite grammars map to finite automata, Context-free grammars to push-down automata, and so on, Until phrase structure grammars that map to Turing machines. Grammars and finite automata are slightly different from FST, but I'm sure there must be a version of that theorem that talks about FST. Chico Camargo 4/22, 4:24pm Chico Camargo Ah brilliant! That's really interesting, since any finite set is enumerable (and more specifically enumerable in binary), any finite set can be translated to strings. But that alone doesn't mean that you would have any bias of any sort Now it's clear to me that it isn't about the sequence order, as in which part comes first, but simply from the hypercube nature of the space of sequences Guillermo Valle Pérez 4/22, 4:31pm Guillermo Valle Pérez See page 20 of http://web.cs.ucdavis.edu/~rogaway/classes/120/spring13/eric-transducers.pdf there it effectively says what we want, that they can encode any map between finite sets web.cs.ucdavis.edu web.cs.ucdavis.edu Guillermo Valle Pérez 4/22, 4:32pm Guillermo Valle Pérez what is comes "from the hypercube nature of the space of sequences"? Chico Camargo 4/22, 4:32pm Chico Camargo yup! I see it Guillermo Valle Pérez 4/22, 4:33pm Guillermo Valle Pérez Also the bias comes from constraining the maps to be simple in the sense of few states in the fst, I think. Clearly if you considered all possible maps between two sets there wouldn't be bias in average i meant: what comes "from the hypercube nature of the space of sequences"? my internal fst is making so many mistakes.. Chico Camargo 4/22, 4:34pm Chico Camargo I agree that if you considered all the possible FST you wouldn't get any nice average, you need simple FSTs haha have you switched to dvorak? 
Chico Camargo 4/22, 4:34pm Chico Camargo What I mean is that the paper I sent you they say: "The Fibonacci GP map therefore offers strong evidence that the sequential nature of biological information determines the fundamental structure of GP maps, which in turn has a profound impact on the course of biological evolution." Guillermo Valle Pérez 4/22, 4:35pm Guillermo Valle Pérez no i meant internal as in in the brain. I wanted to swithc to dvorak but havent had time tongue emoticon Yeah I didn't get that part Chico Camargo 4/22, 4:35pm Chico Camargo And when they say "the sequential nature", I think it suggests that the fact that information is stored in ordered sequences. But I think it isn't so much about that Guillermo Valle Pérez 4/22, 4:35pm Guillermo Valle Pérez Yeah I thought it was more about it having constrained and unconstrained parts which doesnt say anything about how the information is stored/read Chico Camargo 4/22, 4:36pm Chico Camargo Yeah. I think the order doesn't really matter. exactly. and the unconstrained parts could be in the beginning, middle, end, or just have no order Guillermo Valle Pérez 4/22, 4:37pm Guillermo Valle Pérez yeah, in fact in the fsts unconstrained parts are not a fixed portion of the input string, but depend on the previous portions of the input string Chico Camargo 4/22, 4:37pm Chico Camargo as long as you have an unconstrained part whose contribution to the designability of your phenotype size grows exponentially (or just a lot) with the size of the unconstrained part: like 4^L, in the case of RNA, or 2^L in the binary case Guillermo Valle Pérez 4/22, 4:38pm Guillermo Valle Pérez "An unconstrained part" should more correctly be a property of the FST mechanism than a part of the input string. In the FSTs, an unconstrained part is a state whose outputs are the same irresepective of input. 
Chico Camargo 4/22, 4:39pm Chico Camargo indeed unconstrained means ignored by the GP map Guillermo Valle Pérez 4/22, 4:40pm Guillermo Valle Pérez In the case of the FSTs as you grow the length of the input, the input has more chances of looping through these states and everytime you go through it the number of possibilities multiplies by 2, so it grows almost exponentially Chico Camargo 4/22, 4:40pm Chico Camargo so if the GP map doesn't care about sequence/string order, the unconstrained parts of the genotype won't be ordered, won't be "after a stop codon" yeah Guillermo Valle Pérez 4/22, 4:41pm Guillermo Valle Pérez well there may be some order to them, but may be more complicated and subtle, and not-apparent Chico Camargo 4/22, 4:43pm Chico Camargo but see, a gene network isn't ordered per se. When you decide to represent it as a string, sure, you've ordered it. For the same GP map, different orderings will produce different transducers, and therefore different orderings of the unconstrained parts, but there is no inherent order on a gene network Chico Camargo 4/22, 4:44pm Chico Camargo There is, though, an exponential contribution to designability: 3^X, where X is the number of "unconstrained" interactions Guillermo Valle Pérez 4/22, 4:44pm Guillermo Valle Pérez And you can actually construct GP maps where the bias is towards some designed complex sequence instead of towards simple ones. However these require very special kinds of structures with the sequence coded into it, while a bias towards a simple output requires a simple structure, and thus appears often in FSTs Chico Camargo 4/22, 4:45pm Chico Camargo precisely. Guillermo Valle Pérez 4/22, 4:46pm Guillermo Valle Pérez yeah i agree that for the network the output shouldnt depend on the ordering. 
However, maybe due to the nature of the FST different ordering conventions may need different FSTs Chico Camargo 4/22, 4:46pm Chico Camargo Yeah Guillermo Valle Pérez 4/22, 4:47pm Guillermo Valle Pérez "3^X, where X is the number of "unconstrained" interactions". what are the uncstrained interactions? Chico Camargo 4/22, 4:47pm Chico Camargo the genotype for a gene network is the network's directed graph each link between nodes A and B can be +, -, or non-existent (0). That's what I called 'interactions': these links Guillermo Valle Pérez 4/22, 4:49pm Guillermo Valle Pérez and it is unconstrained if it doesnt affect the phenotype you define? Chico Camargo 4/22, 4:50pm Chico Camargo Exactly. When I said "There is, though, an exponential contribution to designability: 3^X, where X is the number of "unconstrained" interactions", I meant that if there are X interactions that can be a +, a - or an 0 and that won't make a difference for the resulting phenotype, they'll be increasing the designability of that phenotype by 3^X. Guillermo Valle Pérez 4/22, 4:53pm Guillermo Valle Pérez It'd be interesting to find the actual FST for the network description to cycle/phenotype and see if those unconstrained parts can be seen as unconstrained states in the fst Chico Camargo 4/22, 4:54pm Chico Camargo I mean, if for every GP map there is a FST, it should be I'm still pondering This sequence story: The cause for all these properties would not be "sequential nature of biological information", but the fact that in nature you often have unconstrained parts whose contribution to the designability of your phenotype size grows exponentially. Guillermo Valle Pérez 4/22, 4:58pm Guillermo Valle Pérez Yeah Chico Camargo 4/22, 4:58pm Chico Camargo Well, it's from the same principles behind PCA, that most things need a short description that's sloppiness, essentially Guillermo Valle Pérez 4/22, 4:58pm Guillermo Valle Pérez PCA? 
Chico Camargo 4/22, 4:59pm Chico Camargo Principal Component Analysis. It's a technique that effectively reduces the dimensionality of a dataset by finding a set of axes (the base, in linear algebra terms) where most of the variation in your dataset can be described by the first axes Guillermo Valle Pérez 4/22, 5:00pm Guillermo Valle Pérez Ah yeah. Yeah I mean, we are trying to find (at least part) of the explanation of the simplicity in the word smile emoticon Chico Camargo 4/22, 5:01pm Chico Camargo in the word and in the world? wink emoticon Guillermo Valle Pérez 4/22, 5:02pm Guillermo Valle Pérez haha yeah lucky mistake Chico Camargo 4/22, 5:02pm Chico Camargo Man, I'm really hungry, I gotta get some lunch But let's keep talking about this! This is really exciting, and it's awesome to talk to you about that grin emoticon Also, would you send me your code so I play with it as well? Guillermo Valle Pérez 4/22, 5:04pm Guillermo Valle Pérez Sure, i'll put it on github and share! And yeah we'should talk again Chico Camargo 4/22, 5:04pm Chico Camargo Sweet! See you later then! Guillermo Valle Pérez 4/22, 5:04pm Guillermo Valle Pérez like emoticon Guillermo Valle Pérez 4/22, 5:18pm Guillermo Valle Pérez https://github.com/guillefix/fst-bias

guillefix/fst-bias fst-bias - Code for the exploration of bias for simplicity in the output of random finite state transducers github.com Chico Camargo 4/22, 5:52pm Chico Camargo Cheers!

Convolutional code

guillefix 3rd July 2016 at 5:04am

Convolutional neural network

guillefix 24th June 2016 at 1:56am

Nando's vid

http://cs231n.github.io/

http://cs231n.github.io/convolutional-networks/

http://cs231n.stanford.edu/syllabus.html

Convnet demo on the web! details here

Convolution

The "c1 feature maps" are a set of 2D arrays of neurons. Each array looks for one feature, and a point in the array represents the location of that feature. To accomplish this, that point of the array is connected to a set of pixels centered on the corresponding point in the input image (an array of pixels). We have far fewer parameters because for each of these 2D arrays we only specify the parameters for one of the neurons in that array; all other neurons are identical, just connected to displaced sets of pixels.

What is a convolution? Start from correlation: flip the parameter vector (or array) and rewrite the correlation, and we get a convolution. Of course, there's much more to convolutions, e.g. the convolution theorem.

Stride: how much you jump in pixel space (or in the previous layer) when you move from one point to the next in a feature map.

One can also expand the boundary (zero padding) so that the layer obtained by the convolution has the same size as the original layer.
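A minimal NumPy sketch of the stride and zero-padding mechanics described above (the function name and example sizes are illustrative; strictly speaking it computes a cross-correlation, and flipping the kernel would turn it into a true convolution):

```python
import numpy as np

def conv2d(image, kernel, stride=1, pad=0):
    """Sliding-window cross-correlation with optional zero padding and stride.
    (Flipping the kernel beforehand would make it a true convolution.)"""
    if pad > 0:
        image = np.pad(image, pad, mode="constant")  # zero padding
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1  # output height
    ow = (iw - kw) // stride + 1  # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                   # simple averaging filter
same = conv2d(image, kernel, stride=1, pad=1)    # zero padding keeps 5x5
strided = conv2d(image, kernel, stride=2)        # stride 2 shrinks to 2x2
```

With `pad=1` and a 3x3 kernel the output keeps the 5x5 input size; with `stride=2` and no padding it shrinks to 2x2.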

Nice example

So many indices!

Pooling

This is what it does: it downsamples, both to save memory and for invariance (being more insensitive to perturbations).
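A minimal sketch of max pooling, assuming 2x2 non-overlapping windows (window size and the toy feature map are illustrative). Note that moving a maximum around within its window leaves the output unchanged, which is a toy illustration of the invariance mentioned above:

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Downsample by taking the maximum over each window."""
    h, w = feature_map.shape
    oh = (h - size) // stride + 1
    ow = (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = feature_map[i*stride:i*stride+size,
                                    j*stride:j*stride+size].max()
    return out

fm = np.array([[1., 3., 2., 0.],
               [4., 2., 1., 1.],
               [0., 1., 5., 2.],
               [2., 0., 1., 3.]])
pooled = max_pool(fm)   # each 2x2 window collapses to its maximum
```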

We can also apply non-linearities between layers of course, e.g. for contour enhancement

Use as many of these layers (convolutions and poolings) as we can train, 20+ (Deep learning)

At the end we may have a fully connected layer to do the classification, but researchers are questioning whether it is that useful..

We may visualize the features in the feature maps by visualizing the matrices of parameters.

Sentence ConvNets

Vid

Sentence DynConvNet

Document models (Misha Denil)

Natural language processing

MatConvNet: CNNs for MATLAB

Cosine similarity (Network theory)

guillefix 12th February 2016 at 12:55pm

Cosine similarity (a.k.a. Salton's cosine) is a measure of structural similarity of two nodes in a network. It counts the number of common neighbours of nodes i and j (given by $n_{ij} = \sum_k A_{ik}A_{kj}$) and divides by the geometric mean of the degrees of i and j:

$\sigma_{ij} = \frac{n_{ij}}{\sqrt{k_i k_j}}$

where $\sigma_{ij}$ is the cosine similarity. The formula equals the cosine of the angle between the i-th and j-th rows of the adjacency matrix (equivalently, columns, for an undirected network) considered as vectors, hence the name.
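The formula above can be computed for all node pairs at once from the adjacency matrix; here is a sketch on a hypothetical 4-node undirected toy network:

```python
import numpy as np

# Adjacency matrix of a small undirected toy network
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

n = A @ A            # n[i, j] = number of common neighbours of i and j
k = A.sum(axis=1)    # node degrees
sigma = n / np.sqrt(np.outer(k, k))   # cosine similarity matrix
```

On the diagonal $n_{ii} = k_i$, so each node has similarity 1 with itself, as expected for a cosine.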

Cosmography

guillefix 6th February 2016 at 1:29am

Cosmology

guillefix 7th May 2016 at 5:06pm

Counterculture movements

guillefix 4th February 2016 at 9:46pm

Craft

guillefix 21st July 2016 at 12:51am

Craft tools

guillefix 21st July 2016 at 12:54am

See for instance:

Textile art tools

Creating new maths

guillefix 23rd January 2016 at 12:12am

Creating new maths is often done by generalizing old maths to new places. See Generalized function, or how the reals generalized the fractions, etc.

How does it feel like to invent maths?

Critical phenomena

guillefix 20th June 2016 at 5:41pm

Critical phenomena in networks

guillefix 15th June 2016 at 4:53pm

Critical phenomena in percolation

guillefix 16th June 2016 at 8:16pm

Critical phenomena in Percolation occurs at the critical value of the occupation probability corresponding to the Percolation phase transition, which separates the percolating and the non-percolating phases.

Critical phenomena

Percolation models at the critical point show several interesting critical phenomena:

  • Symmetries
  • Fractal structure of the critical percolation clusters. Scaling invariance leads to the self-similarity characteristic of Fractals, and indeed the clusters have fractal geometry. Even for $p \neq p_c$, the clusters are fractal at length scales $l \ll \xi$, the correlation length, and non-fractal (Euclidean) at larger length scales. An argument using the scaling hypothesis, and the number of nodes (mass) of clusters, shows that the fractal dimension of the clusters is also universal, as it is related to other universal critical exponents (see pages 13-14 in Saberi's review). There are other fractal dimensions that one can define, like the one for the minimum-length path between points, or those for the perimeter, backbone, dangling ends, and red sites (or bonds).

Scaling hypotheses

There are a number of scaling hypotheses for several quantities for percolation near criticality (see Renormalization group for origin of scaling hypotheses).

Upper critical dimension

$d_c$. It is believed that when $d \geq d_c$, the percolation process behaves roughly in the same manner as percolation on an infinite regular tree, and the critical exponents take on the corresponding values given by mean-field theory.

Real-space renormalization group

Renormalization Group Theory - Percolation. In particular, see here.

A real-space renormalization group for site and bond percolation

See also here.

Scaling theory of percolation clusters

Cryptography

guillefix 24th June 2016 at 1:15am

Culture

guillefix 17th May 2016 at 1:16am

Culture (/ˈkʌltʃər/) is, in the words of E.B. Tylor, "that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society."

https://en.wikipedia.org/wiki/Culture

Portal:Contents/Culture and the arts

Analysis of flag designs

Custom stylesheet

guillefix 24th July 2016 at 1:25am

.tc-search-results * { color: black !important; }

html button { /*background: #222;*/ border-color: #222; }

html select { /*background: #222;*/ }

/* .subtitle-dark { color: #777; } */

html .tc-page-container-wrapper { min-height: 100%; background: rgba(0, 0, 0, 0.53); }

.tc-page-container-wrapper > div { min-height: 100vh; }

code { /*color: #DBA2B0;*/ }

html .tc-tag-label { /*background-color: rgba(227, 230, 101, 0.6);*/ }

.img-centered { display: block; margin-left: auto; margin-right: auto }

strong { /*color: #A5F7DA;*/ }

Color picker

Opacity of background


Opacity: 50

Cyberpunk

guillefix 3rd April 2016 at 1:39am

Cybersecurity

guillefix 24th June 2016 at 1:15am

Cyberself

guillefix 9th June 2016 at 6:45pm

Google Keep

http://guillefix.me

Polymath quest has Inner Universe [extension of #cyberself]

Social media

Backup data. Mind data.

Overleaf

See DB\Cosmos, etc.... Dropbox..

Cylinder set

guillefix 14th July 2016 at 2:16pm

Natural open sets forming a basis of a Product topology

Definition. This is not right, I think: he is defining open cylinders, which form a subbase (and he's not even defining all open cylinders). See Product topology for more.


https://www.wikiwand.com/en/Cylinder_set

D'Alembert's principle

guillefix 10th June 2016 at 2:12pm

D'Alembert's principle in overdamped dynamics

guillefix 10th June 2016 at 2:13pm

d3js

guillefix 20th July 2016 at 1:40pm

Data & Knowledge

guillefix 28th June 2016 at 4:26pm

The world is full of information, much more than can be captured in this TiddlyWiki and its Cosmography section, and elsewhere..

There are many people who have made great efforts to make a lot of information readily available in an organized fashion (data), and also to make sense of it (knowledge), for example by visualizing it.

United Nations Statistics Division

Gapminder Hans Rosling website

Places & places - Mapping science

http://gdeltproject.org/

https://twitter.com/explorables

http://bionumbers.hms.harvard.edu/

http://populate.tools/


Main portal for all of Wikipedia content: https://en.wikipedia.org/wiki/Portal:Contents

A nice categorization: Wiki Category:Fundamental categories

Wiki Portal:Contents/Portals

Portal:Contents/Categories

https://en.wikipedia.org/wiki/Category:Indexes_of_topics

https://en.wikipedia.org/wiki/User:West.andrew.g/Popular_pages http://wikitop.alwaysdata.net/wikitop_en_portal.html

Wiki Portal:Contents/Reference

Portal:Featured portals

Dictionaries, for instance: https://en.wiktionary.org/wiki/Wiktionary:Main_Page

https://en.wikipedia.org/wiki/Category:Main_topic_classifications

https://en.wikipedia.org/wiki/Special:AllPages

http://nptel.ac.in/course.php?disciplineId=111


Knowledge organizing

TW, https://github.com/ether/etherpad-lite, http://kune.cc/


Collections of technical books: https://www.safaribooksonline.com/learn/

Libgen, sci-hub.io, bookzz . org

https://www.reddit.com/r/Scholar/comments/3bs1rm/meta_the_libgenscihub_thread_howtos_updates_and/


hmm http://aaaaarg.fail/collection/list

http://corp.yewno.com/

Data compression

guillefix 4th July 2016 at 11:03pm

See Information theory

Data compression refers to the problem of finding a code that makes the average length of an encoded message as short as possible. This is sometimes called "source coding" because the most compressed code depends on the properties of the Information source producing the message.
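A sketch of why the most compressed code depends on the source: the Shannon entropy of the symbol distribution lower-bounds the achievable bits per symbol for lossless compression. Here it is computed from the empirical distribution of a toy message (the message is an arbitrary illustrative choice):

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message):
    """Shannon entropy of the empirical symbol distribution: a lower bound,
    per symbol, on lossless compression of this source."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

msg = "aaaabbc"                 # skewed source: 3 symbols, far from uniform
h = entropy_bits_per_symbol(msg)
# h is about 1.38 bits/symbol, below the log2(3) ~ 1.58 bits/symbol
# a fixed-length code for a 3-symbol alphabet would need
```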

https://en.wikipedia.org/wiki/Data_compression

Data compression theory

Lossless compression

Source coding theorem

Lossy compression

Rate distortion theory

Compression - Computerphile Entropy in Compression - Computerphile

Data compression codes


Algorithmics on compressed objects

Data compression codes

guillefix 1st July 2016 at 6:32pm

Data processing theorem

guillefix 5th July 2016 at 12:45pm

Data resources

guillefix 5th April 2016 at 8:54pm

http://crunchbase.linkurio.us/demo/

https://www.crunchbase.com/#/home/index

Machine learning data sets

IMAGENET semantically categorized image database

WordNet. Semantically structured and linked word database

http://imgur.com/a/K4RWn

Data science

guillefix 13th July 2016 at 3:39pm

Data structure

guillefix 7th July 2016 at 6:48pm

Tree (data structure)

Heap (data structure)

Tuple

Stack

Queue

List

Graph

See Lynda.com videos.

Data transmission

guillefix 1st July 2016 at 4:13pm

See Information theory for more details. See also Communication theory

Data transmission refers to the transfer of information from one entity to another, by means of a Data transmission system.

Data transmission system

See more here: Data transmission system

Properties of data transmission systems

Types of communication channel

Types of transmitter/receivers

These are mostly specified by the code they use. In data transmission systems, these are mostly Error-correcting codes.

Data transmission theory

The main desired properties of a data transmission system, and thus the main subjects of study are:

  • Reliability. How unlikely is the receiver to interpret the message wrongly?
  • Speed. What is the maximum rate at which information can be sent over the channel? Measured in (bits decoded)/(channel bits sent)

Thus the main problem of study is: for a particular communication channel, find a code such that the data transmission rate is as high as possible, while the receiver receives the information with negligible probability of error.

This is sometimes called "channel coding" because the most reliable code depends on the properties of the channel. It works by finding codewords (sequences of input values) whose images under the channel are as disjoint as possible; this is equivalent to sphere packing in high dimensions.

The main result in data transmission theory is the Channel coding theorem, which gives a fundamental limit to the data transmission rate that can be achieved by a code, while keeping error rates negligible. This limit turns out to be the Channel capacity.
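The simplest (and far from practical) illustration of trading rate for reliability is a repetition code: this toy sketch sends each bit three times (rate 1/3) and decodes by majority vote, so any single flipped bit per block is corrected. It is only meant to make the rate-vs-reliability trade-off concrete, not to represent real error-correcting codes:

```python
def encode(bits, n=3):
    """Repetition code: send each bit n times (rate 1/n)."""
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    """Majority vote over each block of n channel bits."""
    blocks = [received[i:i+n] for i in range(0, len(received), n)]
    return [1 if sum(block) > n // 2 else 0 for block in blocks]

msg = [1, 0, 1, 1]
sent = encode(msg)          # 12 channel bits for 4 message bits
sent[1] ^= 1                # the channel flips one bit
assert decode(sent) == msg  # a single error per block is corrected
```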

Data transmission system engineering

The goals stated above for a data transmission system are achieved in two main ways:


See http://pfister.ee.duke.edu/thesis/chap1.pdf, and other chapters.

Entropy in Compression - Computerphile

Data transmission system

guillefix 1st July 2016 at 3:14pm

A data transmission system is the middle part of a Communication system, composed of:

Data type

guillefix 14th July 2016 at 2:51pm

https://en.wikipedia.org/wiki/Data_type

In computer science and computer Programming, a data type or simply type is a classification identifying one of various types of data, such as real, integer or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored.
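As a small sketch of the definition above in a dynamically typed language: each Python value carries its type, and the type determines the valid operations and what they mean (the specific values are arbitrary):

```python
# The type of a value determines its valid operations and their meaning.
x = 3            # int: '+' means arithmetic addition
s = "3"          # str: '+' means concatenation
print(x + 1)     # 4
print(s + "1")   # 31
try:
    s - 1        # str defines no subtraction
except TypeError:
    print("invalid operation for type str")
```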

http://programmers.stackexchange.com/questions/291950/are-data-type-declarators-like-int-and-char-stored-in-ram-when-a-c-program-e

Weak And Strong Typing

static vs dynamic typing: http://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages (see third answer).

How do compiled dynamically typed languages work? Do they store type data (unlike it says here?)

https://en.wikipedia.org/wiki/Type_inference

http://stackoverflow.com/questions/1393883/why-is-dynamic-typing-so-often-associated-with-interpreted-languages

Data types

Integer

Float

String

Data visualization

guillefix 20th July 2016 at 1:43pm

Database

guillefix 30th June 2016 at 1:36am

Database theory

Relational databases

See Lynda.com videos.

de Solla Price's model

guillefix 15th February 2016 at 11:02pm

also known as Price's model. The de Solla Price model is used to explore the effect of preferential attachment during network formation on the structure of the resulting network. See Models of network formation for more information.

Proposed in the study of citation networks. These have properties:

  • New papers almost only cite existing ones. The network is thus approx. a directed acyclic graph.
  • Node: paper. Edge: citation of a paper to an existing paper.

The model defines the average number of papers cited by a new paper (i.e. the average out-degree) to be $c$ (and the distribution around $c$ is assumed sufficiently well-behaved; for instance, the variance should be finite).

The main assumption of the model is that the probability of each new edge created when we add a new node depends only on the degree of that node (the in-degree, to be precise, i.e. the number of citations it has). In particular, it assumes an affine preferential attachment:

q_i=\frac{k_i+a}{\sum_i(k_i+a)}=\frac{k_i+a}{N(a+c)}

where k_i \equiv k_{i}^{\text{in}} is the in-degree, and we have used that, for directed networks, \langle k_{i}^{\text{in}} \rangle =\langle k_i^{\text{out}}\rangle=c. Finally, a>0 is introduced so that nodes can gain edges even if they don't have any in-degree yet (otherwise they would stay at zero in-degree forever, and the model wouldn't really be realistic).

Note that a new paper can cite an existing paper more than once in this model, but the frequency at which these double edges occur is low, and in the limit N\rightarrow \infty they are subdominant.

q_i is the probability that a new edge is connected to node i. On average c edges are added (and the distribution of the number of edges, whose average is c, is independent of the probability q_i), therefore the expected number of edges added to node i is cq_i. Even though the probabilities for each node getting an edge are not independent, the expected number of edges added over a set of nodes is the sum of the cq_i (see Probability theory Note 1). In particular, the expected number of edges added to all nodes with in-degree k, of which there are Np_k(N) (where p_k(N) is the degree distribution when there are N nodes in the network; note that this changes, as we are adding nodes in the process of formation), is:

Np_k(N)\, c \frac{k+a}{N(a+c)}= c \frac{k+a}{a+c} p_k(N)

We can now write a master equation, which for k\geq 1 is:

(N+1)p_k (N+1) =Np_k(N)+\frac{c(a+k-1)}{a+c}p_{k-1}(N)-\frac{c(a+k)}{a+c}p_k(N)

or in words:

\# \text{ with degree } k \text{ when total is } N+1 = \# \text{ with degree } k \text{ when total was } N

+\ \# \text{ with degree } k-1 \text{ when total was } N \text{ that gained one edge}

-\ \# \text{ with degree } k \text{ when total was } N \text{ that gained one edge}

The equation for k=0 is a bit different:

(N+1)p_0 (N+1) =Np_0(N)+1-\frac{ca}{a+c}p_0(N)

where there are no nodes with degree -1, and there is an extra +1 due to the node we just added.

Now, taking the limit N \rightarrow \infty, and using the shorthand p_k := p_k(\infty), the k\geq 1 equation becomes:

p_k =\frac{c(a+k-1)}{a+c}p_{k-1}-\frac{c(a+k)}{a+c}p_k=\frac{c}{a+c}\left((a+k-1)p_{k-1}-(a+k)p_k\right)

p_0=1-\frac{ca}{a+c}p_0

where the terms proportional to N have cancelled out.

We can then solve these to get a recursion relation for p_k, with initial condition p_0 from the second equation. The solution can then be expressed in terms of Euler Beta functions, which in the asymptotic limit of large k give a power-law decay with exponent:

\alpha=2+\frac{a}{c}

Thus, many scholars believe that this simple model may describe the fundamental mechanism by which power laws are obtained in many real-world networks.
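As a quick numerical check (a sketch, not from the original notes; the values of a and c are arbitrary), one can iterate the recursion implied by the equations above, p_0 = (a+c)/(a+c+ca) and p_k = c(a+k-1)p_{k-1}/(a+c+c(a+k)), and confirm that the local slope of log p_k versus log k approaches -(2+a/c):

```python
import math

# Numerical check that Price's model gives p_k ~ k^{-(2 + a/c)} at large k.
# (Sketch: a and c are illustrative parameter choices.)
a, c = 1.0, 1.0

# k = 0 equation: p_0 = 1 - (c*a/(a+c)) * p_0  =>  p_0 = (a+c)/(a+c+c*a)
p_prev = (a + c) / (a + c + c * a)

kmax = 10_000
for k in range(1, kmax + 1):
    # k >= 1 equation rearranged: p_k = c(a+k-1) p_{k-1} / (a+c+c(a+k))
    p = c * (a + k - 1) * p_prev / (a + c + c * (a + k))
    if k == kmax:
        # local slope of log p_k vs log k
        slope = math.log(p / p_prev) / math.log(k / (k - 1))
    p_prev = p

print(slope)  # approaches -(2 + a/c) = -3
```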

Computer simulation of de Solla Price's model

See section 14.1.1 of Newman's book.

Straightforward simulation of the model is slow. An alternative was proposed by Krapivsky and Redner, based on the following rule:

With probability c/(c+a) choose a vertex in strict proportion to in-degree. Otherwise choose a vertex uniformly at random from the set of all vertices.

The trick for choosing a vertex in proportion to in-degree is to choose an edge (stored in a list) with uniform probability and then take the vertex it points to, so that the probability of choosing a vertex is exactly proportional to how many edges point to it, i.e. its in-degree k_{i}.
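A minimal sketch of this simulation trick (parameter values are illustrative; as a simplification, duplicate citations are redrawn rather than allowed as double edges):

```python
import random

# Sketch of the Krapivsky-Redner method for simulating Price's model.
# Each new paper cites exactly c existing ones; a is the attractiveness offset.
def price_network(n_nodes, c=3, a=1.0, seed=0):
    rng = random.Random(seed)
    targets = []   # one entry per edge endpoint: drawing uniformly from this
                   # list picks a vertex in proportion to its in-degree
    edges = []
    for new in range(c, n_nodes):      # start from c seed nodes
        cited = set()
        while len(cited) < c:
            if targets and rng.random() < c / (c + a):
                v = rng.choice(targets)    # proportional to in-degree
            else:
                v = rng.randrange(new)     # uniform over existing nodes
            cited.add(v)                   # set drops duplicate citations
        for v in cited:
            edges.append((new, v))
            targets.append(v)
    return edges

edges = price_network(2000)
```

For large networks the in-degree distribution generated this way develops the power-law tail with exponent 2 + a/c derived above.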

Debugging

guillefix 31st January 2016 at 12:32am

Decision theory

guillefix 12th July 2016 at 12:32am

The mathematical study of strategies for optimal decision-making between options involving different risks or expectations of gain or loss depending on the outcome.

https://www.wikiwand.com/en/Decision_theory

See also Machine learning, and Reinforcement learning

Deep art

guillefix 19th July 2016 at 7:08am

Deep learning

guillefix 9th July 2016 at 4:55am

Deep learning companies and projects

guillefix 26th April 2016 at 7:28pm

Deep learning on the browser

guillefix 9th July 2016 at 4:17am

Deep reinforcement learning

guillefix 5th July 2016 at 1:19pm

Deep sea creatures

guillefix 7th May 2016 at 1:09am

http://gizmodo.com/there-are-some-seriously-gnarly-creatures-at-the-bottom-1775158529?utm_campaign=socialflow_io9_facebook&utm_source=io9_facebook&utm_medium=socialflow

A gorgonocephalid basket star, a relative of the brittle star.

A hydromedusa jellyfish, spotted near “Enigma Seamount” at a depth of 3,700 meters.

Degree ceremony (MMathPhys)

guillefix 14th July 2016 at 5:07pm

Degree ceremony 2015-2016 more

Your Degree Ceremony
Student Name: Guillermo Jorge Valle Perez
Award Programme: MMathPhys Mathematical & Theoretical Physics
College: Magdalen College
Date of Ceremony: Friday 30 September 2016
Time: 2:30 pm
Number of Guaranteed Ceremony Tickets: 3
Hold Status: None
Ceremony Status: You have chosen to attend the above ceremony
You have agreed to the University Terms and Conditions regarding Degree Ceremonies.

Degree of a vertex (Graph theory)

guillefix 27th January 2016 at 2:53pm

The degree, k_i, of a vertex, i, is the number of edges connected to the vertex. For an undirected graph with n vertices, it is related to the adjacency matrix by:

k_i=\sum_{j=1}^n A_{ij}

Also the total number of edges m is:

2m=\sum_{j=1}^n k_j=\sum_{ij}A_{ij}

as each edge has two ends ('stubs').

The mean degree, c, is then:

c=\frac{2m}{n}.

Aside: a node with a "high" degree is sometimes called a 'hub'.
A network where all nodes have same degree is called 'regular'.

The number of edges in a complete (i.e. with the maximum number of edges) simple graph equals the number of ways of choosing a pair of vertices where the order doesn't matter, since each edge corresponds to one such choice. The number of such choices is \binom{n}{2}.

The density (or connectance), \rho, is the fraction of these that are actually present:

\rho=\frac{m}{\binom{n}{2}}=\frac{2m}{n(n-1)}=\frac{c}{n-1}\approx\frac{c}{n}

where the last approximation holds for large n.

A network is sparse if \rho \rightarrow 0 as n \rightarrow \infty, and dense otherwise. These definitions make sense mathematically when one has a model for an ensemble of graphs that can be defined for any n. For an empirical network, one has two situations:

  • One has empirical data for the network at different values of n, so the behaviour as n increases can be deduced.
  • One has to find an appropriate model defining an ensemble of random graphs for different values of n that somehow captures the type of network of the empirical one.

Directed networks

For directed networks one has two types of degree:

in-degree, the number of ingoing edges (sum of a row in adj. matrix)

out-degree, the number of outgoing edges (sum of columns in adj. matrix).

Now the total number of edges m is:

m=\sum_{j=1}^n k_j^{\text{in}}=\sum_{j=1}^n k_j^{\text{out}}=\sum_{ij}A_{ij}

as each edge has one ingoing end and one outgoing end. Clearly then the mean degrees are equal: c_{\text{in}}=c_{\text{out}}\equiv c=\frac{m}{n}.

In a weighted network, one defines the strength of a node as the weighted degree:

s_i=\sum_{j=1}^{n}w_{ij},

where w_{ij} is the weight matrix.
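The quantities above are straightforward to compute from an adjacency matrix. A minimal sketch (the example matrix is made up):

```python
# Degree, mean degree, and density from an adjacency matrix
# of an undirected simple graph (illustrative example).
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
n = len(A)

degrees = [sum(row) for row in A]      # k_i = sum_j A_ij
m = sum(degrees) // 2                  # 2m = sum_i k_i
c = 2 * m / n                          # mean degree
rho = 2 * m / (n * (n - 1))            # density: m / C(n, 2)

print(degrees, m, c, rho)  # [2, 2, 3, 1] 4 2.0 0.666...
```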

Depletion force

guillefix 2nd July 2016 at 3:44pm

Descriptional complexity

guillefix 13th July 2016 at 8:55pm

What is the shortest description of an object? The size of this description is its descriptional complexity. This general notion may also be called "structural complexity".

See also Complexity theory, for other notions of complexity.

Kolmogorov complexity

Based on the minimum size of a program (interpreted by a Turing machine) that produces (describes) the object.

Complexity measures based on data compression

Automata-based descriptional complexity

Entropy-based complexity measures

Permutation complexity

Network complexity


YB videos: https://www.youtube.com/watch?v=HWsa_hZ7F3I

Design

guillefix 1st June 2016 at 7:17pm

Design optimization

guillefix 28th February 2016 at 11:29pm

Designing phoretic micro- and nano-swimmers

guillefix 9th June 2016 at 7:36pm

See Self-diffusiophoresis

Small objects can swim by generating around them fields or gradients which in turn induce fluid motion past their surface by phoretic surface effects.

We quantify, for arbitrary swimmer shapes and surface patterns, how efficient swimming requires both surface ‘activity’ to generate the fields, and surface ‘phoretic mobility’ (the quantity that determines the direction of the velocity relative to the driving gradient, which depends on the specifics of the solute/surface interactions). We show in particular that

(i) swimming requires symmetry breaking in either or both of the patterns of ‘activity’ and ‘mobility,’ and
(ii) for a given geometrical shape and surface pattern, the swimming velocity is size-independent. In addition, for given available surface properties, our calculation framework provides a guide for optimizing the design of swimmers.

Designing phoretic micro- and nano-swimmers (pdf)

See Self-diffusiophoresis, and Diffusiophoresis for theory

Designs of self-diffusiophoretic particles

Spherical

Janus particle

Saturn particle

Three-slice design

Thin rod

Use slender body theory


Is there a way for particles to actively "fight" their rotational diffusion and make them go straight for longer, without an external field?

Determinant of a graph

guillefix 12th July 2016 at 1:01am

video

See Loop analysis

Derived using Topological trace formula

gives topological polynomial, which is just the characteristic polynomial of the transition matrix

Topological zeta function


Examples from fsts. See notebook and here

FST 10

http://www.wolframalpha.com/input/?i=1-3z%5E2%3D0

Can analyze forward or backward

FST 21

1-2*z+z^2-2*z^3+2*z^4=0

(2^(-log_2(0.59)*40)/2^40)*10^6

-log_2(0.59)

2.7*log_2 (100)/(37-14)

Deterministic finite automaton

guillefix 26th June 2016 at 3:28pm

Developmental biology

guillefix 21st May 2016 at 9:21pm

Differential equations

guillefix 29th March 2016 at 3:33pm

Differential geometry

guillefix 29th May 2016 at 12:36am

Diffusio-osmosis

guillefix 2nd July 2016 at 5:26pm

An Osmotic force caused by concentration gradients.

Diffusio-Osmosis of Electrolyte Solutions in Microscale and Nanoscale

Osmosis is a particular case, in which diffusio-osmosis drives liquid across a semi-permeable membrane.

See Diffusiophoresis

Diffusion

guillefix 3rd June 2016 at 12:17am

See Stochastic processes

See Brownian motion for derivations. Also Fick's laws of diffusion

Diffusion equation

\frac{\partial P}{\partial t} = D \nabla^2 P,

where D is the diffusion coefficient, which when derived from a random walk is

D = \frac{\langle \Delta x^2 \rangle}{2dt}

where d is the dimension of space. The 2 comes from the fact that the walker can jump in either of two directions per dimension. \langle \Delta x^2 \rangle and t represent the expected squared step length and the time step of the random walk, respectively. See for example these notes for a derivation.
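A quick simulation (a sketch; walker and step counts are arbitrary choices) checks this relation in one dimension, where unit steps at unit time intervals give D = 1/2 and hence ⟨x²(t)⟩ = 2Dt = t:

```python
import random

# 1D random walk: unit steps, one step per unit time, so
# D = <dx^2>/(2*d*t) = 1/2 and we expect <x^2> = 2*D*steps = steps.
random.seed(0)
walkers, steps = 20_000, 100
msd = 0.0
for _ in range(walkers):
    x = 0
    for _ in range(steps):
        x += random.choice((-1, 1))
    msd += x * x
msd /= walkers

print(msd)  # close to 2*D*steps = 100
```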

See also a simple kinetic derivation of diffusion coefficient (in the context of solid state diffusion), see page 7 Also see Alex's notes on kinetic theory.

Solutions to diffusion equation, using Fourier transform, and using Green functions. Can also derive from Fokker-Planck equation

Solutions to diffusion equation for free, absorbing, and reflecting boundary conditions.

https://en.wikipedia.org/wiki/Diffusion

Applications

Smoluchowski capture rate

Diffusion limit on rate of reaction between molecules.

Begin with a spherical particle, and assume a stationary solution, \partial_t P = 0. Set the concentration to be fixed at C_\infty far from the particle, and 0 on its surface, as particles reaching the surface are assumed to be captured.

To do the general case where both particles are moving, one should use relative and center of mass coordinates (#trythis). The answer is:

k_{ab}=4\pi(D_a+D_b)(R_a+R_b),

where a and b are the particle species, and D and R are the diffusion constants and radii.
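As an order-of-magnitude illustration (the values below are assumptions, not from the text), evaluating the rate constant for two small proteins in water recovers the classic diffusion-limited scale of ~10^9–10^10 M⁻¹s⁻¹:

```python
import math

# Smoluchowski rate constant k_ab = 4*pi*(Da+Db)*(Ra+Rb).
# Illustrative values for small proteins in water (assumptions).
Da = Db = 1e-10      # diffusion coefficients, m^2/s
Ra = Rb = 2e-9       # radii, m

k = 4 * math.pi * (Da + Db) * (Ra + Rb)   # m^3/s per pair

# convert to conventional M^-1 s^-1: multiply by Avogadro's number
# (per mole) and 1e3 (litres per cubic metre)
NA = 6.022e23
k_molar = k * NA * 1e3

print(k_molar)  # ~6e9 M^-1 s^-1
```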

Phoretic mechanisms of colloids

Diffusion-limited aggregation

guillefix 30th April 2016 at 3:35am

Diffusion-Limited Aggregation, a Kinetic Critical Phenomenon

DLA - Diffusion Limited Aggregation

Review

Good notes about surface growth in general.

Similar models:

Eden growth model

random animals

Diffusiophoresis

guillefix 2nd July 2016 at 3:41pm

Diffusiophoresis is the process by which particles move through a chemical concentration gradient, due to an attractive or repulsive interaction between the particle and the chemical compound. It is a kind of phoretic mechanism of colloids.

Essentially, the surface of the particle, due to Intermolecular forces (or other entropic forces), can be attracted to, or repelled by, the solute molecules. If there is a gradient in their concentration, they can exert a net force on the particle, causing it to gain a certain velocity (and the particle will exert a force on the fluid, causing it to slip over its surface).

In Self-diffusiophoresis (a kind of self-propulsion), the particle itself produces the compound it interacts with.

http://pubs.acs.org/doi/abs/10.1021/la00050a035

https://en.wikipedia.org/wiki/Diffusiophoresis

http://link.springer.com/referenceworkentry/10.1007/978-3-642-27758-0_328-5#page-1

Particle motion driven by solute gradients with application to autonomous motion: continuum and colloidal perspectives

Theory

When the thickness of the interfacial layer is thin compared to the object, the resulting flow is most conveniently described by an effective slip velocity of the liquid past the solid at position \mathbf{r}_s on the surface, proportional to the local gradient of c. See a simple derivation below (from Colloid Transport by Interfacial Forces).

(I think they are missing a d on the bottom in (9))

See derivation in the black notebook. Note that at the end you need to use the trick of changing order of integration, as done in Derjaguin's original paper (on page 7).

The general tensorial equation is:

\mathbf{v}_s(\mathbf{r}_s) = \mu (\mathbf{r}_s)(\mathbf{I}-\mathbf{n}\mathbf{n}) \cdot \nabla c (\mathbf{r}_s)

\mu (\mathbf{r}_s) is the local surface phoretic mobility, which depends on the particular interaction between the particle and the solute molecules (through the integral in (9)).

Then using the reciprocal theorem (of Low Reynolds number flows), one can find the velocity of the particle, knowing the slip velocity of the solvent around it. In a given basis, the drift velocity of the colloid, \mathbf{V}, is then:

\mathbf{V}\cdot \hat{\mathbf{f}}_i = - \int \int_S d \mathbf{r}_s\, \mathbf{n} \cdot \mathbf{\sigma}_i \cdot \mathbf{v}_s

where \mathbf{\sigma}_i is the hydrodynamic stress tensor at the surface S of an object of the same shape dragged by an applied unit force, \hat{\mathbf{f}}_i = \hat{\mathbf{e}}_i, in the absence of slip.

In the case of spherical particles, the drift velocity turns out to be simply:

\mathbf{V} = -\frac{1}{4\pi} \int \mathbf{v}_s d\Omega

For uniform mobility, \mathbf{v}_s = \mu \nabla_{||}c, and so

\mathbf{V} = -\mu \nabla_{||}c

This simple equation for the velocity is appropriate for the Active colloids used in studying the Self-assembly of active colloids.
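As a sanity check of the spherical-average formula (a sketch, not from the original notes), one can evaluate it numerically. Taking the standard surface solute field of an inert, impermeable sphere with uniform mobility μ in a uniform ambient gradient G ẑ, the slip is v_s = -(3/2) μ G sinθ θ̂, and the average reproduces V = -μG ẑ:

```python
import math

# Numerically evaluate V = -(1/4π) ∮ v_s dΩ for a sphere.
# Assumed setup (illustration): inert sphere, uniform mobility mu,
# ambient gradient G along z; slip v_s = -(3/2) mu G sin(θ) θ̂,
# whose z-component is +(3/2) mu G sin²(θ) since θ̂_z = -sin(θ).
mu, G = 0.7, 2.0

Vz, n_theta = 0.0, 400
dtheta = math.pi / n_theta
for i in range(n_theta):          # midpoint rule in θ; φ integral is trivial
    th = (i + 0.5) * dtheta
    vs_z = 1.5 * mu * G * math.sin(th) ** 2
    Vz -= vs_z * math.sin(th) * dtheta * (2 * math.pi) / (4 * math.pi)

print(Vz)  # ≈ -mu*G = -1.4, consistent with V = -mu ∇c
```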

See Hydrodynamic slip

In the derivation of equation (9) they assumed that \Phi doesn't depend on x. Is this the reason why a particle without a gradient in c, even if it has a gradient in its phoretic mobility, is predicted to have 0 drift velocity? Would you get a non-zero velocity if you took the dependence of \Phi on x into account?

See Diffusiophoresis caused by gradients of strongly adsorbing solutes

Giant Amplification of Interfacially Driven Transport by Hydrodynamic Slip: Diffusio-Osmosis and Beyond

Diffusiophoresis: Migration of Colloidal Particles in Gradients of Solute Concentration

Kinetic Phenomena in the boundary layers of liquids 1. the capillary osmosis

Digital art

guillefix 9th July 2016 at 4:52am

Digital circuit

guillefix 23rd May 2016 at 11:12pm

Digital physics

guillefix 24th April 2016 at 1:56am

Dilute magnetic alloy

guillefix 12th July 2016 at 3:46pm

These are materials in which a very small concentration (at most a few percent) of a magnetic element, often iron or manganese, is substituted at random locations inside a nonmagnetic metallic host, such as one of the noble metals (copper, silver, or gold).

At low densities of the magnetic atoms, their resistance, which in normal metals decreases and eventually flattens as the temperature is lowered, starts to rise again at a few degrees above absolute zero. This came to be known as the Kondo effect.

At higher concentrations (already at about 1%), the impurities in dilute magnetic alloys begin interacting, and they were among the first examples of Spin glasses.

Directed percolation

guillefix 16th June 2016 at 8:23pm

Discrete calculus

guillefix 23rd January 2016 at 12:03am

Discrete dynamical system

guillefix 8th July 2016 at 6:21pm

Discrete dynamical systems, a.k.a. maps

See Nonlinear map

See Automata theory, Cellular automata, Boolean network, Dynamical systems on networks..

Graph dynamical system

Great software to explore discrete dynamics: Discrete Dynamics Lab Tools for researching Cellular Automata, Random Boolean Networks, multi-value Discrete Dynamical Networks, and beyond

Discrete dynamical

An Introduction to chaotic dynamical systems. Second edition Chaos theory

Workshop on Combinatorics, Number Theory and Dynamical Systems - Artur Avila

Discrete mathematics

guillefix 29th May 2016 at 12:31am

Discrete memoryless source

guillefix 4th July 2016 at 11:07pm

A discrete memoryless source is an Information source which is:

  • Memoryless. In this context it means that the random variables in the discrete stochastic process making up the source are independently and identically distributed
  • Discrete. In this context it means that the alphabet is countable.
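Since the symbols are i.i.d., the entropy rate of a discrete memoryless source reduces to the entropy of its single-symbol distribution. A minimal sketch (the distributions below are assumed examples):

```python
import math

# Entropy (bits/symbol) of a discrete memoryless source: H = -sum_i p_i log2 p_i.
# For an i.i.d. source this is also the entropy rate of the whole process.
def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))   # fair binary source: 1 bit/symbol
print(entropy([0.9, 0.1]))   # biased source: about 0.469 bits/symbol
```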

Discrete topology

guillefix 14th July 2016 at 3:52am

A topology where every subset of a set XX is open.

Discrete-time Markov chain

guillefix 4th July 2016 at 7:13pm

See Markov chain

Class structure

Communicating classes. Set of states that can communicate with each other (which constitutes an Equivalence relation).


A transition matrix where the whole state space is a single communicating class is called irreducible.

Stopping time, strong Markov property.

Recurrent vs transient states

Let C be a communicating class. Then either all states in C are transient or all are recurrent.
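The class structure can be computed directly: states i and j communicate iff each is reachable from the other. A minimal sketch (the transition matrix is an assumed example):

```python
# Communicating classes of a Markov chain: i ~ j iff i reaches j and j reaches i.
# The transition matrix P below is an assumed example (rows sum to 1).
P = [
    [0.5, 0.5, 0.0],
    [0.3, 0.7, 0.0],
    [0.2, 0.3, 0.5],
]
n = len(P)

def reachable(start):
    # depth-first search over transitions with positive probability
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in range(n):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

reach = [reachable(i) for i in range(n)]
classes = set()
for i in range(n):
    classes.add(frozenset(j for j in range(n) if j in reach[i] and i in reach[j]))

print(sorted(map(sorted, classes)))  # [[0, 1], [2]]
```

Here {0, 1} is a closed communicating class and {2} is transient, consistent with the statement that the states of a class are all recurrent or all transient.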

Discussion on finite state complexity and GP map bias

guillefix 26th April 2016 at 6:59pm

See MMathPhys oral presentation

Different definition of finite-state complexity here: http://web.mit.edu/cocosci/Papers/complex.pdf (though still not the one we need below)

Using finite-state complexity we can define the complexity of a string produced by a finite transducer. It is effectively the length of the smallest program that describes both a transducer (according to some encoding) and the string itself. This definition is not universal as for Turing machines, because of the non-universality of finite transducers.

This is not what I want, because they don't fix the transducer, while a given GPM would fix the transducer. Assuming we can use the same idea as above, if the length of the input is much larger than the length of the transducer, we are effectively inputting random fixed-length strings to a Turing machine (that we know halts, hm..), and by Levin's coding theorem (applied here to the non-asymptotic case..) we expect that "strings with many long descriptions to have a short description too". Furthermore, if we assume that the map is many-to-one, then each string would have many long descriptions, so each will have a short description. But if there are many such strings, not all of them can have short descriptions. Thus, the only consistent situation is for a few strings with simple descriptions having many long descriptions too, and many strings with few long descriptions.

This assumed that the finite transducer is simple (defined by the condition above that the input string to the finite state transducer (FST) be much larger than the FST description). If it isn't, the bias argument above still holds I think, but because the transducer is complex, its outputs will all be complex, with a complexity dominated by the transducer's.

It seems like Levin's coding theorem holds for strings of all inputs, which means it works for the argument above! However, I don't understand it fully, in particular its derivation, so I'm not too confident about this. See this book

SEE EMAIL CONVERSATION FOR FOLLOW UP ON THIS. Reasoning above doesn't hold. Kamal's answer:

I read the three papers - thanks for those.

Shallit and Wang (2001) was not super interesting, though obviously relevant in the sense that focus on computable complexities.

Calude (2011) is more interesting. The most interesting result is that a kind of Invariance Theorem holds for finite state transducers (which are the weakest type of computation, UTMs being the most powerful because they can compute any algorithm). The Invariance Theorem in AIT comes from the fact that any UTM can simulate any other UTM, while their Inv Thm for finite state machines does not invoke this property. Assuming prefix-free descriptions of the transducers, this implies a kind of coding theorem for finite state transducers. This is nice because finite state transducers do not have the mystical air that UTMs do (uncomputable). I think it is worth citing this Calude article as a comment, but maybe not making too much of a deal about it.

I also looked at Guillermo's link – just a comment on some reasoning in there (I know it is just notes):

Furthermore, if we assume that the map is many to one, then each strings would have many long descriptions, so each will have a short [shorter] description. But if there are many such strings, not all of them can have short descriptions[true, but they can all have shorter descriptions]. Thus, the only consistent situation is for a few strings with simple descriptions having many long descriptions too, and a many strings with few long descriptions[hence bias in the map].

The reasoning here is a little rushed – if the map is many-to-one, then all output strings have shorter descriptions. But this does not explain why some outputs have short descriptions and some long (which leads to bias). The central thing to explain in bias is why some outputs will have shorter descriptions than others. The statement "strings with many long descriptions to have a short description too"

does not say anything about how long these short descriptions are, whereas the argument presented assumes that these are short enough to be a problem, in the sense of "not all of them can have short descriptions".

As a trivial but illustrative example, consider the many-to-one map from binary string of length 10 to binary strings of length 5. We can easily construct a uniform distribution for this system. According to the argument above, this system should show bias….but it does not (even though the map is simple).

Disease

guillefix 5th July 2016 at 3:10am

Disordered system

guillefix 13th July 2016 at 3:55pm

Dispersion (Chemistry)

guillefix 11th May 2016 at 2:20pm

A dispersion is a material comprising more than one phase where at least one of the phases consists of finely divided phase domains, often in the colloidal size range, dispersed throughout a continuous phase.

A continuous phase is a phase not interrupted in space.

A dispersed phase is a phase constituted of particles of any size and of any nature dispersed in a continuous phase of a different composition.

The dispersion medium is the matrix for the dispersed phase. The dispersion medium is the continuous phase of the dispersion.

Source from IUPAC: Terminology of polymers and polymerization processes in dispersed systems (IUPAC Recommendations 2011)*.

Depending on the size of the particles in the dispersed phase we have:

  • Solution, for size less than a nanometer.
  • Colloid, for size between a nanometer and a micrometer.
  • Coarse dispersion, for a size larger than a micrometer.

"Dispersion", without adjective, is often used to refer to the colloidal regime.

Dispersion types

For phases with particles of colloidal size or larger. For smaller sizes, see solution.

Continuous medium: Gas
  • Dispersed gas: none (because all gases are mutually miscible)
  • Dispersed liquid: colloidal: liquid aerosol
  • Dispersed solid: colloidal: solid aerosol; coarse: Dust

Continuous medium: Liquid
  • Dispersed gas: if the dispersed phase has enough concentration: Foam
  • Dispersed liquid: colloidal: Emulsion
  • Dispersed solid: colloidal: Suspension

Continuous medium: Solid
  • Dispersed gas: porous solid filled with gas; if the dispersed phase has enough concentration: solid Foam
  • Dispersed liquid: porous solid filled with liquid, like Gels
  • Dispersed solid: colloidal: Solid sol, like Cranberry glass; coarse: conglomerates

See https://en.wikipedia.org/wiki/Dispersion_(chemistry) for examples.

Dissipation-fluctuation relation

guillefix 27th April 2016 at 1:28am

distortion_types_liquid_crystal.PNG

guillefix 7th February 2016 at 11:27pm

Distribution of sizes for the small clusters in percolation models

guillefix 11th June 2016 at 6:17pm

A quantity of interest in Percolation theory is the distribution of sizes of the small clusters in percolation models.

This can be quantified by the total number of clusters of size s, n_s. Sometimes one works with n_s/N instead, to eliminate the scaling with N that would make n_s \rightarrow \infty as N \rightarrow \infty.

One can also work with the probability that a random node belongs to a cluster of size s, which can be easily seen to be \pi_s = \frac{s n_s}{N} = \frac{\#\text{ of nodes in clusters of size }s}{\text{total }\#\text{ of nodes}}. This is clearly the probability of picking a node inside a cluster of size s given a particular network configuration. In the case of Percolation on random graphs and networks, it's also the probability that a random network configuration (following the appropriate probability distribution defining the network ensemble) makes a particular chosen node be in a cluster of size s. This is because the two operations are statistically independent.

\pi_s can be shown to decrease exponentially with s in the subcritical regime, and it decays more slowly in the supercritical regime (see here). At the critical point, the cluster sizes follow a power-law distribution (as do, for instance, avalanche sizes in the sandpile model at criticality).
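A small sketch of measuring n_s and π_s from one network configuration (the G(N, p) ensemble and parameter values are assumed choices for illustration):

```python
import random

# Measure cluster sizes in one G(N, p) configuration and build n_s
# (number of clusters of size s) and pi_s = s*n_s/N.
random.seed(1)
N, p = 500, 0.001   # subcritical: mean degree ≈ p*(N-1) ≈ 0.5

adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].append(j)
            adj[j].append(i)

# flood fill to find the cluster containing each unvisited node
seen, sizes = [False] * N, []
for start in range(N):
    if seen[start]:
        continue
    stack, size = [start], 0
    seen[start] = True
    while stack:
        u = stack.pop()
        size += 1
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                stack.append(v)
    sizes.append(size)

n_s = {}
for s in sizes:
    n_s[s] = n_s.get(s, 0) + 1
pi_s = {s: s * cnt / N for s, cnt in n_s.items()}

print(sum(pi_s.values()))  # every node lies in exactly one cluster, so this is 1.0
```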

DNA

guillefix 2nd July 2016 at 1:57pm

Deoxyribonucleic acid

See DNA nanotechnology, MMathPhys oral presentation

https://en.wikipedia.org/wiki/DNA

Structure of DNA

See DNA nanotechnology

The Shape of DNA - Numberphile. DNA is a right-handed Helix.

Both strands are right-handed (almost always in biology) (they have to have the same handedness, as can be seen by looking at a cross-section and seeing the cross-sections of the strands; if they were of opposite handedness, they would collide (given the helices have the same radii)). The two strands of DNA also have a direction associated with them (the backbone determines it), and the strands are antiparallel, as seen in the animation below

See also Chirality in biology

Can model DNA as a ribbon, and can define its torsion. The boundaries of the ribbon are the backbones, and they form the same surface that you get by twisting a normal ribbon.

Can also coarse-grain more, and model it as a curve.

Packing of DNA in a cell. See here

How DNA unties its own knots - Numberphile, using Type II topoisomerase (see more here). Drugs that target type II topoisomerase are used as antibiotics, because this enzyme is necessary for bacterial cells to replicate correctly. Because DNA is a helix and forms a loop in bacteria, when it is unzipped by helicase the two single-strand loops are interlinked. Topoisomerase then cuts and stitches DNA in such a way as to unlink them. See DNA replication

Processes of DNA

DNA replication

Chemistry of DNA

https://www.technologyreview.com/s/419590/quantum-entanglement-holds-dna-together-say-physicists/

Information in DNA

Genetics

Second layer of information in DNA confirmed

DNA computing

guillefix 29th June 2016 at 6:41pm

DNA nanoengineering

guillefix 29th June 2016 at 6:40pm

DNA nanomachines

guillefix 29th June 2016 at 7:02pm

Rapid chiral assembly of rigid DNA building blocks for molecular nanofabrication Practical components for three-dimensional molecular nanofabrication must be simple to produce, stereopure, rigid, and adaptable. We report a family of DNA tetrahedra, less than 10 nanometers on a side, that can self-assemble in seconds with near-quantitative yield of one diastereomer. They can be connected by programmable DNA linkers. Their triangulated architecture confers structural stability; by compressing a DNA tetrahedron with an atomic force microscope, we have measured the axial compressibility of DNA and observed the buckling of the double helix under high loads.

Molecular Machinery from DNA: Synthetic Biology from the Bottom up

DNA nanomachines

Programmable DNA Nanosystem for Molecular Interrogation an embedded Förster Resonance Energy Transfer (FRET) system, in which one cyanine 3 (cy3) molecule is positioned on the frame and one cyanine 5 (cy5) molecule is on the ring, reports the relative position of the ring under various conditions

Hybrid, multiplexed, functional DNA nanotechnology for bioanalysis

Programmable motion of DNA origami mechanisms

Reversible Reconfiguration of DNA Origami Nanochambers Monitored by Single-Molecule FRET

Universal computing by DNA origami robots in a living animal (see also DNA computing).

Controlled Release of Encapsulated Cargo from a DNA Icosahedron using a Chemical Trigger

Mechanical design of DNA nanostructures

DNA Scissors Device Used to Measure MutS Binding to DNA Mis-pairs

Nanomechanical DNA origami 'single-molecule beacons' directly imaged by atomic force microscopy

A DNA-fuelled molecular machine made of DNA

Construction of a 4 Zeptoliters Switchable 3D DNA Box Origami

Molecular Engineering of DNA: Molecular Beacons

See also Atomically precise manufacturing

DNA nanotechnology

guillefix 29th June 2016 at 6:53pm

I think this may be the article Turberfield mentioned: http://www.nature.com/nature/journal/v525/n7567/full/nature14860.html

also this: http://www.nature.com/nnano/journal/v10/n9/full/nnano.2015.204.html

This talks about 3D scafolded dna origami: http://www.nature.com/nmeth/journal/v8/n3/full/nmeth.1570.html


Nature - DNA nanotechnology

Structural DNA Nanotechnology: State of the Art and Future Perspective

Challenges and opportunities for structural DNA nanotechnology

DNA nanotechnology from the test tube to the cell

Methods and techniques

DNA origami

William Shih (Harvard) Part 1: Nanofabrication via DNA Origami

DNA Origami with Complex Curvatures in Three-Dimensional Space

DNA bricks/tiles

Building with DNA bricks

Complex shapes self-assembled from single-stranded DNA tiles

DNA brick crystals with prescribed depths

Polyhedra Self-Assembled from DNA Tripods and Characterized with 3D DNA-PAINT

Three-Dimensional Structures Self-Assembled from DNA Bricks

LEGO-like DNA Structures

Other DNA self-assembly techniques and reviews

Rational design of self-assembly pathways for complex multicomponent structures

Folding DNA to create nanoscale shapes and patterns (2006, Rothemund).

Complex DNA Nanostructures from Oligonucleotide Ensembles

Placement and orientation of individual DNA shapes on lithographically patterned surfaces

Self-assembly of DNA into nanoscale three-dimensional shapes

DNA CAD

Computer-Aided Design of DNA Origami Structures

Computer-assisted design for scaling up systems based on DNA reaction networks



Applications and engineering

DNA nanostructures: a shift from assembly to applications

DNA nanoengineering

DNA nanomachines

DNA computing

Single-molecule analysis

Regulation of DNA Methylation Using Different Tensions of Double Strands Constructed in a Defined DNA Nanostructure

Single-Molecule Mechanochemical Sensing Using DNA Origami Nanostructures

Nanomedicine


Order custom DNA origami parts!

DNA replication

guillefix 28th June 2016 at 8:19pm

Replication of DNA is a step in Mitosis

  1. Helicase unzips DNA, and forms a replication fork.
  2. Primase makes a small piece of RNA called a primer
  3. DNA polymerase enzymes bind to the primer and make the new strand of DNA. The leading strand is replicated continuously, while the lagging strand is replicated in steps, forming Okazaki fragments.
  4. Exonuclease removes all the RNA primers from both strands of DNA
  5. DNA polymerase enzymes then fill the gaps left behind in DNA.
  6. DNA ligase seals up the fragments of DNA

The animation below is missing steps 4 and 5:

dna_self_assembly.png

guillefix 12th February 2016 at 12:02am

Door

guillefix 5th July 2016 at 4:12am

A door is a movable structure used to block off, and allow access to, an enclosed space such as a Building or Vehicle.

Dots and boxes

guillefix 13th June 2016 at 7:56pm

Draft 2 of 'New Tiddler 1'

guillefix 28th January 2016 at 6:49pm

Draft of 'Algorithmic complexity'

guillefix 8th April 2016 at 4:44pm

Draft of 'Cloud computing'

guillefix 7th May 2016 at 1:35am

Draft of 'MMathPhys Miniprojects'

guillefix 16th March 2016 at 8:00pm

Nonlinear systems

The effects of small damping, nonlinearity and forcing on a harmonic oscillator:

\ddot{x} + \beta \dot{x} + x + \delta x^3 = \Gamma \cos{\omega t}

  • The simple harmonic oscillator (forced and damped, in general)
  • Duffing oscillator
    • Free (unforced) Duffing oscillator
      • Free undamped Duffing oscillator
      • Free damped Duffing oscillator
    • Forced damped Duffing oscillator.

There are potentially 8 qualitatively different forms of the equation, depending on which combination of the 3 parameters is non-zero.
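The forced damped case (all 3 parameters non-zero) can be integrated numerically; here is an illustrative sketch using a classical fourth-order Runge-Kutta step and sampling once per drive period (a Poincaré section). The parameter values are my own arbitrary choices, not taken from the references.

```python
import numpy as np

# Forced damped Duffing oscillator: x'' + beta*x' + x + delta*x^3 = Gamma*cos(omega*t)
BETA, DELTA, GAMMA, OMEGA = 0.2, 1.0, 0.3, 1.0  # illustrative values

def rhs(t, state):
    x, v = state
    return np.array([v, -BETA * v - x - DELTA * x**3 + GAMMA * np.cos(OMEGA * t)])

def rk4_step(t, state, dt):
    # classical 4th-order Runge-Kutta step
    k1 = rhs(t, state)
    k2 = rhs(t + dt / 2, state + dt / 2 * k1)
    k3 = rhs(t + dt / 2, state + dt / 2 * k2)
    k4 = rhs(t + dt, state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# 200 steps per drive period; record the state once per period
dt = 2 * np.pi / OMEGA / 200
state, t, section = np.array([1.0, 0.0]), 0.0, []
for n in range(200 * 200):
    state = rk4_step(t, state, dt)
    t += dt
    if n % 200 == 199:
        section.append(state.copy())
print(len(section), "Poincare points; last:", section[-1])
```

Plotting the `section` points in the (x, ẋ) plane is the usual way to distinguish periodic from chaotic responses as the parameters are varied.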

The Duffing Equation: Nonlinear Oscillators and their Behaviour

More papers and references:

https://en.wikipedia.org/wiki/Intermittency

https://en.wikipedia.org/wiki/Crisis_%28dynamical_systems%29

Y. Ueda, Steady Motions Exhibited by Duffing’s Equation: A Picture Book of Regular And Chaotic Motions

[[Catastrophes with Indeterminate Outcome Stewart, H. B. ; Ueda, Y.|http://ezproxy-prd.bodleian.ox.ac.uk:2084/stable/51909?seq=1#page_scan_tab_contents]]

EXPLOSION OF STRANGE ATTRACTORS EXHIBITED BY DUFFING'S EQUATION - Yoshisuke Ueda

Common dynamical features on periodically driven strictly dissipative oscillators (introduces torsion and winding numbers)

Comparison of bifurcation sets of driven strictly dissipative oscillators

Wada basins

https://en.wikipedia.org/wiki/Lakes_of_Wada

Wada basin boundaries and basin cells Other link

Unpredictable behavior in the Duffing oscillator: Wada basins

Testing for Basins of Wada

Response Of A Harmonically Excited Hard Duffing Oscillator – Numerical And Experimental Investigation

[[Experimental investigation of the response of a harmonically excited hard Duffing oscillator|http://www.ias.ac.in/article/fulltext/pram/068/01/0099-0104]] From here

Analytical methods

Exact analytical solutions for forced cubic restoring force oscillator Uses Jacobi elliptic function (only for undamped Ueda oscillator I think).

A comparison of classical and high dimensional harmonic balance approaches for a Duffing oscillator

Second order averaging and bifurcations to subharmonics in duffing's equation

Subharmonic Oscillations in Nonlinear Systems

Chaotic states and routes to chaos in the forced pendulum

Organization of periodic orbits in the driven Duffing oscillator

Structure in the bifurcation diagram of the Duffing oscillator

superstructure in the bifurcation set of the duffing equation

General case of crisis-induced intermittency in the Duffing equation for double-well Duffing oscillator.

On the jump-up and jump-down frequencies of the Duffing oscillator

More books:

Chaos in Nonlinear Oscillators: Controlling and Synchronization By M Lakshmanan, K Murali

Antimonotonicity reversal of period-doubling cascades


Networks

Spatial networks

Draft of 'New Tiddler 1'

guillefix 28th January 2016 at 6:20pm

Draft of 'Unconventional computing'

guillefix 24th June 2016 at 3:04am

Draft of 'Virtual reality'

guillefix 21st April 2016 at 12:18am

drift.jpg

Driven matter

guillefix 3rd June 2016 at 12:12am

Driven matter refers to a type of bulk matter, often soft condensed matter, to which energy is being applied in a way that significantly affects some of its degrees of freedom. It is thus a driven system, in the sense of Control theory and control systems. It is closely related to Active matter.

Drone

guillefix 12th July 2016 at 12:57am

Drugs

guillefix 8th April 2016 at 5:22pm

Alcohol
Cannabis
Benzodiazepines
Cocaine
Drug related anxiety
Drug related infections
Drug related mood
Drug related personality
etc.

Dry active matter

guillefix 13th July 2016 at 3:51pm

Duffing oscillator

guillefix 14th March 2016 at 4:56pm

The Duffing oscillator is a nonlinear oscillator.

\ddot{x} + \beta \dot{x} + x + \delta x^3 = \Gamma \cos{\omega t}

Physical meaning

The oscillator corresponds to a nonlinear spring, with either hardening for \delta > 0 or softening for \delta < 0 (for amplitudes that are not too large, as otherwise its motion becomes unbounded).

Free (unforced) Duffing oscillator

Free undamped Duffing oscillator

\Gamma = 0, \beta = 0

The equation of motion can be integrated to obtain a conserved energy, so the system is a Hamiltonian system:

E(t) \equiv \frac{1}{2} \dot{x}^2 + \frac{1}{2} x^2 + \frac{1}{4} \delta x^4 = \text{const}

Free damped Duffing oscillator

When \beta > 0, E(t) satisfies:

\frac{d E(t)}{dt} = - \beta \dot{x}^2 \leq 0

One can easily show that E(t) is indeed a Lyapunov function, and that the origin is globally asymptotically stable.
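A quick numerical sanity check of this Lyapunov property (a sketch; the RK4 integrator and the parameter values are my own illustrative assumptions): for \beta > 0 the energy decays toward the minimum at the origin.

```python
import numpy as np

BETA, DELTA = 0.2, 1.0  # illustrative values

def rhs(state):
    # Free damped Duffing oscillator: x'' + beta*x' + x + delta*x^3 = 0
    x, v = state
    return np.array([v, -BETA * v - x - DELTA * x**3])

def rk4_step(state, dt):
    # classical 4th-order Runge-Kutta step
    k1 = rhs(state)
    k2 = rhs(state + dt / 2 * k1)
    k3 = rhs(state + dt / 2 * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state):
    # E = v^2/2 + x^2/2 + delta*x^4/4
    x, v = state
    return 0.5 * v**2 + 0.5 * x**2 + 0.25 * DELTA * x**4

state = np.array([1.0, 0.0])
E0 = energy(state)
for _ in range(5000):
    state = rk4_step(state, 0.01)
# dE/dt = -beta*v^2 <= 0, so the energy should have decayed substantially
print("E0 =", E0, "-> E(t=50) =", energy(state))
```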

Forced Duffing oscillator

More interesting: shows nonlinear resonances, chaotic behaviour, intermittency, jump phenomena, etc. See Lakes of Wada.

Nonlinear resonances

Treat with multiple scales method

Primary resonance

Secondary resonances

Subharmonic

Superharmonic

Onset of chaos

Period-doubling cascade

Reverse period doubling and reverse cascade (bubbles)

Intermittency

Lakes of Wada

Other?

Dynamic programming

guillefix 30th June 2016 at 1:37am

https://en.wikipedia.org/wiki/Dynamic_programming

  • Overlapping subproblems -> memoization: record a value the first time it is computed, then look it up subsequently (table lookup).
  • Optimal substructure: global optimal solution can be constructed from optimal solutions to sub-problems.
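The two ingredients above can be illustrated with the classic Fibonacci example (an illustrative sketch, not from the linked article), top-down with memoization and bottom-up with a table:

```python
from functools import lru_cache

# Top-down: memoize overlapping subproblems (table lookup on repeat calls)
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Bottom-up: build the table from optimal solutions to sub-problems
# (optimal substructure)
def fib_bottom_up(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib(50), fib_bottom_up(50))  # both 12586269025, in O(n) instead of O(2^n)
```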

Dynamic statistical encoding

guillefix 28th June 2016 at 4:33am

Dynamical Instability in Boolean Networks as a percolation Problem

guillefix 15th June 2016 at 5:19pm

Dynamical Instability in Boolean Networks as a Percolation Problem pdf

Phase Transitions in Complex Network Dynamics

A connection between the percolation transition and the onset of chaos in the Kauffman model

Percolation and spreading of damage in a simplified Kauffman model

Activities and Sensitivities in Boolean Network Models

Core Percolation and Onset of Complexity in Boolean Networks

Annealed approximation: Random Networks of Automata: A Simple Annealed Approximation

Boolean functions in Boolean networks are represented by a truth table, which in turn can be represented by a 2^K-length vector/string of 0s and 1s, for a K-input truth table. 2^K is the number of possible inputs, i.e. the cardinality of the set \{1,0\}^K. The bit string can be interpreted as a binary decision tree.

Activities and Sensitivities in Boolean Network Models

The average sensitivity (when averaged over all the functions in the network) appears to be a good parameter for predicting whether the dynamics of the Boolean network are ordered or chaotic

Activities and Sensitivities in Boolean Network Models
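As a minimal sketch of the truth-table representation and the sensitivity measure discussed above (my own toy implementation, not code from the paper): the average sensitivity of a K-input Boolean function counts, averaged over all 2^K inputs, how many single-bit input flips change the output.

```python
def average_sensitivity(truth_table, K):
    # truth_table: length-2^K list of 0s and 1s; entry i is the function
    # applied to the K-bit binary expansion of i.
    assert len(truth_table) == 2 ** K
    total = 0
    for i in range(2 ** K):
        for bit in range(K):
            j = i ^ (1 << bit)              # flip one input bit
            if truth_table[i] != truth_table[j]:
                total += 1
    return total / 2 ** K                   # average number of sensitive bits

# XOR: flipping any input bit always flips the output -> sensitivity 2
print(average_sensitivity([0, 1, 1, 0], K=2))   # 2.0
# AND is less sensitive; a constant function has sensitivity 0
print(average_sensitivity([0, 0, 0, 1], K=2))   # 1.0
print(average_sensitivity([0, 0, 0, 0], K=2))   # 0.0
```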

Random Boolean networks: Analogy with percolation (Stauffer)

Some interesting analogies, investigated via computer simulations, between percolation and properties of Kauffman Boolean networks in a 2D lattice

Random Boolean networks: Analogy with percolation


Connection between sensitivity and complexity of GP map of Boolean networks.. MMathPhys oral presentation

Relation between Kolmogorov complexity and sensitivity of a Boolean function.

Sensitivity <> constrained/unconstrained, coding/non-coding, etc.


More references:

A geometrical interpretation of the chaotic state of inhomogeneous deterministic cellular automata

The role of certain Post classes in Boolean network models of genetic networks

Boolean Dynamics with Random Couplings

Isomorphism of Quasispecies and Percolation Models

Spectral theory for the robustness and dynamical properties of complex networks

Phase Transitions in Two-Dimensional Kauffman Cellular Automata

Phase transition in cellular random Boolean nets

The Physics of Structure Formation: Theory and Simulation

Dynamical system

guillefix 8th July 2016 at 5:30pm

How things move

A space (in the mathematical sense, for a continuous space, one often uses a Manifold, or a Topological space), with a Function (a.k.a. a map) that describes how a point in the space evolves (in "time").

Types of dynamical systems

Measure-theoretical dynamical system

Topological dynamical system

Continuous dynamical systems are dynamical systems where the space is continuous. They are often represented as systems of 1st-order O.D.E.s. Linear dynamical systems (linear O.D.E.s) are easy to analyze, by looking at the eigenvalues of the Jacobian.

Discrete dynamical systems are those where the space is discrete. They are often represented as systems of difference equations (see Nonlinear maps).

The richest class of dynamical systems are Nonlinear systems

A dynamical system, whether continuous or discrete, can be partitioned (coarse-grained), so that its dynamics can be studied as Symbolic dynamics. If the system is a Probabilistic dynamical system, then the coarse-graining gives rise to a stochastic process

Deterministic vs probabilistic dynamics

Dynamical systems generally describe deterministic processes. Probabilistic processes are described as Stochastic processes. However, these can sometimes be described as deterministic dynamics of probability distributions, or as a probability measure over a deterministic process (i.e. a Probabilistic dynamical system).


See Wiki page for good intro and different kinds

Encyclopedia:Dynamical systems

Dynamical systems on complex space (particularly discrete ones): Complex dynamics

Nonlinear Dynamics 1: Geometry of Chaos by Predrag Cvitanović (ChaosBook course)

https://en.wikipedia.org/wiki/Floquet_theory

Turing instability


Dynamical systems on networks

guillefix 16th June 2016 at 8:20pm

Dynamics of Boolean networks

guillefix 24th June 2016 at 1:22am

See Boolean network

See Dynamical Instability in Boolean Networks as a percolation Problem

Dynamics of Boolean Networks

Dynamics of Boolean Networks: An Exact Solution

Influence and Dynamic Behavior in Random Boolean Networks

Dynamics of Complex Systems: Scaling Laws for the Period of Boolean Networks. Relation between the (expected) period of a RBN and the number of nodes N. Using some numerical and analytical results, they find a power-law relation.

What Darwin didn't know: natural variation is structured GP map bias in Boolean networks (see MMathPhys oral presentation)

Guiding the self-organization of random Boolean networks (RBN). Quote from article: It is useless to enter an ontological discussion on self-organization. Rather, the question is: when is it useful to describe a system as self-organizing? [...] A model cannot be judged independently of the context where it is used. I've always agreed with this philosophy. Things like self-organizing or complex are perspectives on systems, not hard classifications schemes.

Can explore RBNs with RBNLab

Since RBNs are finite (they have 2^N possible states) and deterministic, eventually a state will be revisited. Then, the network will have reached an attractor. The number of states in an attractor determines the period of the attractor.

Point attractors have period one (a single state), while cyclic attractors have periods greater than one (multiple states, e.g., four in Fig. 2)

Figure 2.

A RBN can have one or more attractors. The set of states visited until an attractor is reached is called a transient. The set of states leading to an attractor form its basin.

The basins of different attractors divide the state space. RBNs are dissipative, i.e., many states can flow into a single state (one state can have several predecessors), but from one state the transition is deterministic toward a single state (one state can have only one successor).

The number of predecessors is also called in-degree. States without a predecessor are called “Garden of Eden” (GoE) states (in-degree = 0), since they can only be reached from an initial condition. Figure 3 illustrates the concepts presented above.

Fig. 3 Example of state transitions. B is a successor state of A and a predecessor of C. States can have many predecessors (e.g., B), but only one successor. G is a Garden of Eden state since it has no predecessors. The attractor C→D→E→F→C has period four.
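The transient/attractor/period notions above can be sketched in code (an illustrative toy RBN; N, K and the random seeds are arbitrary assumptions). Since the state space is finite and the update deterministic, iterating from any initial condition must eventually revisit a state:

```python
import random

def random_boolean_network(N, K, seed=0):
    rng = random.Random(seed)
    # each node reads K randomly chosen nodes...
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    # ...through a random K-input truth table (a 2^K bit string)
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, tables

def step(state, inputs, tables):
    # synchronous update: each node looks up its inputs in its truth table
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def transient_and_period(state, inputs, tables):
    seen = {}                      # state -> time of first visit
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    first = seen[state]
    return first, t - first        # transient length, attractor period

inputs, tables = random_boolean_network(N=8, K=2, seed=0)
init = tuple(random.Random(1).randint(0, 1) for _ in range(8))
transient, period = transient_and_period(init, inputs, tables)
print("transient:", transient, "period:", period)
```

A point attractor shows up as `period == 1`; cyclic attractors have `period > 1`.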

One of the main topics of RBN research is to understand how changes in the topological network (lower scale) affect the state network (dynamics of higher scale), which is far from obvious.

RBNs are generalizations of Boolean Cellular automata (von Neumann 1966; Wolfram 1986, 2002), where the states of cells are determined by K neighbors, i.e., not chosen randomly, and all cells are updated using the same Boolean function

~ ~ ~

The self-organization of RBNs can also be interpreted in terms of complexity reduction. For example, the human genome has approximately 25,000 genes. Thus, in principle, each cell could be in one of the 2^{25,000} possible states of that network. This is much more than the estimated number of elementary particles arising from the Big Bang. However, there are only about 300 cell types (attractors (Kauffman 1993; Huang and Ingber 2000)), i.e., cells self-organize toward a very limited fraction of all possible states.

There are several regimes. In the critical regime, near in-degree 2 (in the topological network): few nodes have many predecessors, while many nodes have few predecessors. Actually, the in-degree distribution (in the state network, I think) approximates a power law (Wuensche 1998).

Dynamics of spin glasses

guillefix 12th July 2016 at 4:18pm

non-equilibrium dynamical properties of Spin glasses.

"Remanence" behaviour

As we’ve already seen (and discuss more fully in section 4.8), a spin glass in the absence of a magnetic field has zero magnetization. But it shouldn’t be surprising that when placed inside a uniform magnetic field, the atomic magnetic moments will try to orient themselves along the field—as occurs in any magnetic system—resulting in a net magnetization. So far not very exciting; but what then happens after the field is removed or altered?

There are any number of ways in which this can be done, and in the spin glass they all lead to somewhat different outcomes.

One approach is to cool the spin glass in a uniform magnetic field H from a temperature above T_f to one well below, and then remove the field. On doing so, the spin glass at first retains a residual internal magnetization, called the thermoremanent magnetization. The thermoremanent magnetization decays with time, but so slowly that it remains substantial on experimental timescales.

Another procedure is to cool the spin glass below T_f in zero field, turn on a field after the cooling stops, and after some time remove the field. This gives rise to the isothermal remanent magnetization.

Memory effects

In the simplest of these, a spin glass is cooled to a temperature below T_f in an external magnetic field, often through a deep thermal quench. The spin glass then sits at that fixed field and temperature for a certain “waiting time” t_w. After the waiting time has elapsed, the field is switched off and the decay of the thermoremanent magnetization is measured at constant temperature. Interestingly, the spin glass “remembers” the waiting time: a change in the rate of decay occurs at a time roughly t_w after the field is removed. Aging is not confined to spin glasses, but their unusual behaviors make them somewhat special.

Theory of non-equilibrium behaviour of spin glasses

all share the features of a wide range of relaxational processes, leading to a broad distribution of intrinsic relaxation times; a significant amount of metastability, meaning that most relaxations, whether involving a small or large number of spins, can only occur after the system surmounts some energy or free energy barrier; and a consequently complicated “energy landscape,” the meaning of which is discussed in section 4.9.

e-Governance

guillefix 1st April 2016 at 10:52pm

Earth science

guillefix 8th July 2016 at 3:17am

The planet Earth as a system. Also known as geoscience.

Hydrology

Topography

Climate

Geology

Ecology

guillefix 28th June 2016 at 4:37pm

Ecology (from Greek: οἶκος, "house", or "environment"; -λογία, "study of" [A]) is the scientific analysis and study of interactions among organisms and their environment.

..Related to environmental studies.


"It's mainly because people haven't been cutting down nearly as much wood for fuel, plus there have been concerted efforts to manage and regrow forests. Also some of the areas have been regrowing after the World Wars made them less suitable for farmland." ~Laurie

Economic innovation

guillefix 27th April 2016 at 6:26pm

Economics

guillefix 3rd July 2016 at 6:01pm

Economics is the social science that describes the factors that determine the production, distribution and consumption of goods and services. It also includes the methods used for the purposeful Engineering of such processes, in a complex Society.

An https://en.wikipedia.org/wiki/Economy (Greek οίκος – "household" and νέμoμαι – "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location.

Macroeconomics

Microeconomics


Economic and product cycle

Diagram of a product cycle (showing the main phases in the life of a product).

The economic cycle involves the product cycle, plus steps that control the product cycle, which involves systems like markets.

Raw material extraction

Production of goods

Industry, the production of goods, often by processing raw materials (Manufacturing)

Distribution of goods

Transport, Trading, markets

Consumption of goods

Demand, use, Culture, trends, necessity, Psychology

Disposal and recycling of goods

Disposal, Recycling


Economic sector

  • Primary (raw materials)
  • Secondary (manufacturing and processing)
  • Tertiary (services)

Quaternary: Data & Knowledge, information services

Quinary sector: human services

Economic development correlated with an increase in the complexity of the economic activity.


See also Resource management.

Tax havens

https://panamapapers.icij.org/the_power_players/

https://en.wikipedia.org/wiki/Grundrisse

http://motherboard.vice.com/read/the-future-of-robot-labour-has-everything-to-do-with-capitalism

Education

guillefix 28th June 2016 at 4:35pm

Effects in evolution

guillefix 23rd June 2016 at 10:15pm

Effects of bias in GP maps

guillefix 26th April 2016 at 6:56pm

See MMathPhys oral presentation

Arrival of the frequent

The Arrival of the Frequent: How Bias in Genotype-Phenotype Maps Can Steer Populations to Local Optima See notes at Arrival of the frequent.

The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA. See notes

Probabilistic bias in genotype-phenotype maps. See more here: http://dingleresearch.weebly.com/publications.html

Self-assembling polyominoes model: A tractable genotype–phenotype map modelling the self-assembly of protein quaternary structure

More...

Modeling the evolution of molecular systems from a mechanistic perspective

Adaptive dynamics under development-based genotype–phenotype maps

Why self-incompatibility in the Brassicaceae is totally cool

Robustness and evolvability

3. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype–phenotype maps. Features observed in several GP maps (including the simple Fibonacci GP map they use as a model):

Common features of GP maps

Genetic correlations greatly increase mutational robustness and can both reduce and enhance evolvability

random null model: that maintains the number of genotypes mapping to each phenotype, but assigns genotypes randomly

Genetic correlations

neutral correlations can be quantified by the robustness to mutations, which can be many orders of magnitude larger than that of the null model, and crucially, above the critical threshold for the formation of large neutral networks of mutationally connected genotypes which enhance the capacity for the exploration of phenotypic novelty. Thus neutral correlations increase evolvability.

non-neutral correlations: Compared to the null model:

i) If a particular (non-neutral) phenotype is found once in the 1-mutation neighbourhood of a genotype, then the chance of finding that phenotype multiple times in this neighbourhood is larger than expected;
ii) If two genotypes are connected by a single neutral mutation, then their respective non-neutral 1-mutation neighbourhoods are more likely to be similar;
iii) If a genotype maps to a folding or self-assembling phenotype, then its non-neutral neighbours are less likely to be a potentially deleterious non-folding or non-assembling phenotype.

Non-neutral correlations of type i) and ii) reduce the rate at which new phenotypes can be found by neutral exploration, and so may diminish evolvability, while non-neutral correlations of type iii) may instead facilitate evolutionary exploration and so increase evolvability.

Examples of GP map bias

suggesting that some of the results discussed in this paper for RNA may hold more widely in biology

See also Evolving automata

Paper with several examples of GP maps, including cellular automata map: An investigation of redundant genotype-phenotype mappings and their role in evolutionary search

Eigenvector centrality

guillefix 27th May 2016 at 10:30pm

See Measures and metrics for networks

The eigenvector centrality (first defined by Bonacich in 1987), is defined by:

\mathbf{A}\mathbf{x}=\kappa_1 \mathbf{x}

where \mathbf{x} is the vector of centralities, and \kappa_1 is the largest eigenvalue of \mathbf{A}. The reason we choose the largest eigenvalue is that this measure can be obtained iteratively: start from an arbitrary centrality vector \mathbf{x}_0, and repeatedly set each node's centrality equal to the sum of the centralities of its neighbours. The component along the eigenvector with the largest eigenvalue then grows exponentially relative to the others, so in the limit we obtain the centrality defined above (up to normalization).

The centrality, then, has the property that it is equal to the sum over centralities of neighbours for each node ii:

x_i = \kappa_1^{-1}\sum_j A_{ij} x_j ..... Eq. 1

so that a node can be important because it is connected to many nodes, or because it is connected to important nodes, or both.
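The iteration just described is power iteration, and a minimal sketch looks like this (my own illustrative implementation; the identity shift is a standard numerical trick to avoid oscillation on bipartite graphs, it leaves the eigenvectors of \mathbf{A} unchanged):

```python
import numpy as np

def eigenvector_centrality(A, iters=500):
    # Repeatedly replace each node's centrality by the sum of its
    # neighbours' centralities (plus its own: iterating A + I has the same
    # eigenvectors as A but shifted eigenvalues, so the iteration cannot
    # oscillate on bipartite graphs), renormalizing at each step.
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x + x
        x /= np.linalg.norm(x)
    return x

# Undirected path graph 0-1-2: the middle node is the most central
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = eigenvector_centrality(A)
print(x)  # ~ [0.5, 0.707, 0.5]; satisfies A @ x = sqrt(2) * x
```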

Eigenvector centrality has problems for directed networks because, defined in the natural way, only vertices in strongly connected components (or their out-components) will have non-zero eigenvector centrality. This is because the map described by Eq. 1 passes centrality along edges in the direction they point, so the in-component will "lose" all its centrality in the long-time limit.

Katz centrality addresses these problems

~ Need G strongly connected for a directed network.


Perron-Frobenius theorem

This theorem is related to ergodicity of the map defined by the recursive relation used to define eigenvector centrality [write it here].

[Look at theorem stuff in Newman books, specially relevant footnotes].

Ensures centralities are positive.

Einstein–Smoluchowski relation

guillefix 27th April 2016 at 1:28am

Electric motor

guillefix 26th June 2016 at 1:52am

Electrical engineering

guillefix 28th June 2016 at 3:59pm

Electrokinetic effects in catalytic conductor-insulator Janus swimmers

guillefix 17th June 2016 at 6:17pm

Self-propelled particle, Self-electrophoresis, Catalytic conductor-insulator Janus swimmer

Electrokinetic effects in catalytic platinum-insulator Janus swimmers

"Pt-insulator Janus particles, the absence of conduction between the two hemispheres suggests a mechanism independent of electrokinetics." (referring to the mechanisms that involve movement of electrons in bimetallic swimmers, see Self-electrophoresis). Thus Self-diffusiophoresis was suggested. However, as they show in that paper, some electrokinetic effects can still play a role in the Pt-insulator Janus particles.

"We find that their motion is due to a combination of neutral and ionic diffusiophoretic as well as electrophoretic effects whose interplay can be changed by varying the ionic properties of the fluid. "

One of their main findings is that a gradient of catalyst is required to produce appreciable propulsion velocity for single metal catalytic swimmers.

Main mechanisms of the electrokinetic effect

To see the main mechanism of the effect they discover (the mathematical derivation is outlined in the paper), notice that at the pole the catalytic reaction happens faster, and so there is a higher or lower concentration of H^+ e^- pairs depending on whether the reaction is mostly consuming or producing them (see reaction diagram). Notice that the electrons (e^-) diffuse much faster inside the Pt metal, so that they spread through the Pt hemisphere, while the proton ions (H^+) diffuse much slower. Note that the electrons will diffuse in such a way that the tangential component of the electric field in the metal is 0. This distribution of charges creates an electric field that drives the ions in the fluid, propelling the Janus sphere. In the case in the paper, I think the place where the reaction happens faster (near the pole) also consumes H^+ faster, so there is a depletion of H^+ there, and a relatively higher concentration near the equator. There is thus a net electric field that pushes the protons from the equator to the pole (i.e. they push each other). They drag the fluid with them too, so that the particle propels itself by this self-electrophoretic mechanism.

See also Ion Drive for Vesicles and Cells

See Colloid Transport by Interfacial Forces for matched asymptotic analysis of fluid flow. And see paper for chemical reaction kinetic and diffusion equations.

Why does the double loop topology mean we can reduce overall catalytic reaction rate without significant reduction of colloid velocity?

Electromagnetism

guillefix 26th June 2016 at 1:52am

Electronic circuit

guillefix 28th June 2016 at 3:55pm

Electronic engineering

guillefix 28th June 2016 at 3:59pm

Electronics

guillefix 23rd May 2016 at 11:15pm

The Art of electronics (3rd edition)

Electrical network

Electrical circuit

Electrophoresis

guillefix 17th June 2016 at 6:27pm

Electrostatics

guillefix 17th June 2016 at 6:26pm

Emotion

guillefix 17th May 2016 at 1:11am

See this post: https://www.facebook.com/groups/hedonistic.imperative/permalink/10152547241106965/ and movie Phenomenon (1996) Dave says: "it is not emotions we need to control but behaviour. We do not learn from emotions by curbing and suppressing them but by fully experiencing them. "

When Emotions Make Better Decisions - Antonio Damasio

Hm, it seems like emotion is our Q function in Reinforcement learning. It is kind of a summary of wisdom from past experiences. Hm this is interesting.. If we are guided by emotions too much then our Q function will learn by trying to amplify the positive emotions it encodes; this may produce a positive feedback loop, which sounds like addiction to me. If however, we ignore emotions too much, we are not making use of this awesome machine learning algorithm we have built in to our brain, and may get stalled in philosophical analysis too often in life, by trying to logically deduce everything.

In fact, modern Artificial intelligence trends seem to show that deep learning, and heuristics based learning are more powerful than the older symbolic/logic approach to AI. However, judging from how our brain works, it appears that the optimal combination may be a combination of the two, using one or the other as appropriate!

Antonio Damasio's research in neuroscience has shown that emotions play a central role in social cognition and decision-making.

This seems to be related to thinking fast & slow (Read that book!), and also how AIs now seem to think more intuitively (so maybe in a sense they have some level of emotion now!).

See this to see how these considerations of thinking fast & slow, heuristically vs deductively, relates to utilitarian ethics issues: Facing the unknown: the future of humanity - Nick Bostrom


Wiki: Emotion

Not sure. Hm, of course, this is just a fuzzy representation, but I think I would swap the terror and amazement branch. It'd be interesting to see the logic behind this better though.

Emulsion

guillefix 9th May 2016 at 8:53pm

Emulsion is a colloidal mixture of two or more liquids that are normally immiscible (unmixable or unblendable). The two liquids form coexisting phases.

Examples:

Energy

guillefix 8th July 2016 at 1:34am

Energy innovation

guillefix 7th May 2016 at 2:01am

http://www.futureearth.org/

Rocky mountain institute

Solar energy

Perovskite solar cell

Energy production

guillefix 7th May 2016 at 2:01am

Producing energy doesn't mean creating it from nothing, as that would violate the principle of conservation of energy in Physics.

Energy production thus refers to converting energy from one form (often a storage form) to another form which is useful (often to do mechanical work).

Fuel-based energy production

Renewable energy production

Technically using the Sun as energy source/fuel.

Hydro-electric plant

Wind power

Solar power

Energy transduction

guillefix 4th May 2016 at 8:45pm

Converting energy from one form to another form

Engineering

guillefix 17th May 2016 at 1:42am

See Technology & Engineering

Here we include applied sciences as part of engineering

Portal:Engineering

Problem-solving strategies

How to solve it by Polya

TRIZ - Theory of inventive problem solving. Apparently used by Samsung

Free MIT books: https://archive.org/details/mitlibraries

Entertainment

guillefix 17th May 2016 at 1:32am

Entity

guillefix 8th July 2016 at 3:14am

Any thing.

Entropy

guillefix 3rd July 2016 at 1:58pm

The entropy, $H(X)$, of a Random variable, $X$, is defined as

$$H(X) = - \sum_x p(x) \log{p(x)}$$
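The definition is a one-liner; a quick sketch (function name is mine, not from the source):

```python
import math

def entropy(p, base=2):
    """Shannon entropy H(X) = -sum_x p(x) log p(x), with 0*log(0) taken as 0."""
    return -sum(px * math.log(px, base) for px in p if px > 0)

print(entropy([0.5, 0.5]))   # a fair coin: ~1 bit
print(entropy([0.25] * 4))   # uniform over 4 outcomes: ~2 bits
print(entropy([1.0]))        # a certain outcome carries no information: 0 bits
```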

video

Entropy rate

guillefix 1st July 2016 at 7:21pm

The entropy rate of an information source (see Data transmission) is the average entropy of a letter of the source.

An information source is often modelled as a discrete-time stochastic process $\{X_k\}$, where each $X_k$ is called a "letter". The entropy rate is then defined as:

$$H_X = \lim_{n\rightarrow \infty} \frac{1}{n} H(X_1, X_2, \cdots, X_n)$$

when the limit exists (see also Shannon-McMillan-Breiman theorem).

Chapter 2 Information Measures - Section 2.10 Entropy Rate of a Stationary Source

One can define a related measure, $H'_X$, by using conditional entropies. It can be shown that, for a stationary Information source, the entropy rate exists and is equal to $H'_X$.
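For a stationary Markov chain the rate reduces to the average conditional entropy of the next letter given the current state, $H = -\sum_i \pi_i \sum_j P_{ij} \log P_{ij}$. A small sketch of that standard formula (names mine):

```python
import math

def markov_entropy_rate(P, pi):
    """Entropy rate (bits/letter) of a stationary Markov chain:
    H = -sum_i pi_i * sum_j P_ij log2 P_ij."""
    return -sum(pi[i] * sum(p * math.log2(p) for p in row if p > 0)
                for i, row in enumerate(P))

# Symmetric two-state chain that flips with probability 0.1: the stationary
# distribution is (1/2, 1/2) and the rate equals the binary entropy H2(0.1).
P = [[0.9, 0.1], [0.1, 0.9]]
print(markov_entropy_rate(P, [0.5, 0.5]))   # ~0.469 bits per letter
```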

Entropy rate of a finite state process

guillefix 3rd July 2016 at 5:13am

Entropy reduction

guillefix 5th July 2016 at 1:06am

Ordering in sequence spaces

A mathematical theory of ordering (with constraints) in sequence spaces was first presented in [7] and [1]. In their setup, an algorithm is sought which "orders" any sequence of length $n$, i.e., which transforms the sequence $\vec{x}$ into the sequence $\vec{y}$ (of the same length and with the same symbols in it), such that the number of possible resulting sequences $\vec{y}$ is as small as possible. In this sense ordering is a generalization of sorting $\vec{x}$, as this would yield the absolute minimal number of sequences $\vec{y}$.

Ordering in Sequence Spaces: An Overview

Creating order in sequence spaces with simple machines

Entropy reduction, ordering in sequence spaces, and semigroups of non-negative matrices: see here

Creating Order and Ballot Sequences

Entropy-based complexity measures

guillefix 7th July 2016 at 7:17pm

See Descriptional complexity

Entropy rate

Often defined for a (probabilistic) Information source.

Here they define a (non-standard) notion of entropy for a specific sequence.

Topological entropy

Topological entropy of a string (symbol sequence)

Defined here

Metric entropy

measure-theoretical or Kolmogorov-Sinai entropy


See Entropy and complexity of finite sequences as fluctuating quantities

Enzyme

guillefix 22nd April 2016 at 11:58pm

https://en.wikipedia.org/wiki/Enzyme

Enzymes are macromolecular biological catalysts.

Enzyme kinetics

guillefix 9th June 2016 at 6:27pm

Michaelis-Menten rule

Derived from kinetic rate equations for a simple catalytic reaction. The rate (per unit volume) of catalysis in steady state (quasi-steady-state approximation) is:

$$k_{\text{eff}} = \frac{k_2 C_S C_E}{C_S+K_M}$$
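A quick numerical sketch of this rate law, checking its two limiting regimes (function and argument names are mine, not from the source):

```python
def michaelis_menten_rate(k2, C_E, C_S, K_M):
    """Catalysis rate per unit volume: k_eff = k2 * C_S * C_E / (C_S + K_M)."""
    return k2 * C_S * C_E / (C_S + K_M)

# Saturation: for C_S >> K_M the rate approaches the maximum k2 * C_E,
# independent of substrate concentration.
print(michaelis_menten_rate(k2=1.0, C_E=2.0, C_S=1e6, K_M=1.0))   # ~2.0
# Low substrate: the rate is approximately (k2 / K_M) * C_S * C_E.
print(michaelis_menten_rate(k2=1.0, C_E=2.0, C_S=1e-3, K_M=1.0))  # ~0.002
```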

Derivation

Epidemics on networks

guillefix 2nd June 2016 at 2:12am

Keywords: Network science, Epidemiology

Cascades on Networks. There Watt's cascade model is described, among other things.

See Mason and Gleeson "Dynamical systems on networks"

Types of contagions

For Simple contagions, a node can get infected by simple exposure to another infected node (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses).

For Complex contagions, nodes get infected by more complex processes, often involving several other nodes. These are often used to model more complicated social contagions and phenomena. See Social dynamics

See also wiki page: Complex contagion

Epidemiology

guillefix 2nd June 2016 at 1:51am

There are many Epidemic models. Some use simple stochastic compartmental models based on a Master equation (see Simple contagion). See Epidemics on networks for models that include the underlying network structure.

Types of contagions

For Simple contagions, a node can get infected by simple exposure to another infected node (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses).

For Complex contagions, nodes get infected by more complex processes, often involving several other nodes. These are often used to model more complicated social contagions and phenomena. See Social dynamics

See also wiki page: Complex contagion

Epistemology

guillefix 8th July 2016 at 2:26am

The theory of Knowledge.

Introduction to Epistemology


What is the nature of knowledge?

What are the obstacles to the attainment of knowledge?

What can be known?

How does knowledge differ from opinion or belief?

Equilibrium statistical physics

guillefix 29th January 2016 at 12:48am

Statistical Mechanics Lecture notes (Oxford Maths)

Statistical Mechanics Lecture notes (Oxford Physics)

Ergodic hypothesis

Equilibrium ensembles

Fundamental postulate

Can formulate as:

  • Maximum entropy. Can be formulated as minimum of a Thermodynamic Potential, depending on constraints imposed.
  • Equal a-priori probabilities principle

Partition function

Thermodynamics

Laws

1st law

2nd law

3rd law

Thermodynamic potentials

Applications

Equivalence relation

guillefix 14th July 2016 at 1:06am

An equivalence relation is a binary Relation, $R$, on a Set $X$ that satisfies:

  • reflexivity: $x R x$ for all $x \in X$;
  • symmetry: if $x R y$ then $y R x$;
  • transitivity: if $x R y$ and $y R z$ then $x R z$.
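On a finite set the equivalence-relation axioms can be checked mechanically; a small sketch (names mine):

```python
def is_equivalence_relation(R, X):
    """Check reflexivity, symmetry and transitivity of R (a set of pairs) on X."""
    reflexive = all((x, x) in R for x in X)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, w) in R
                     for (x, y) in R for (z, w) in R if y == z)
    return reflexive and symmetric and transitive

X = {0, 1, 2, 3}
# Congruence mod 2 is an equivalence relation (classes: evens and odds).
mod2 = {(a, b) for a in X for b in X if a % 2 == b % 2}
print(is_equivalence_relation(mod2, X))   # True
# The strict order "<" is not (it fails reflexivity and symmetry).
less = {(a, b) for a in X for b in X if a < b}
print(is_equivalence_relation(less, X))   # False
```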

Ergodic theory

guillefix 7th July 2016 at 7:03pm

Error-correcting code

guillefix 3rd July 2016 at 5:02am

See Coding theory

Forward error correction

Forward error correction: forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels, where the information flows only one way (see here). The central idea is that the sender encodes the message in a redundant way by using an error-correcting code (ECC). The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.
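The Hamming (7,4) code is small enough to sketch completely: 4 data bits get 3 parity bits, and the parity-check syndrome directly gives the (1-based) position of a single corrupted bit. This uses the standard layout with parity at positions 1, 2 and 4 (a sketch, not from the source):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single corrupted bit using the syndrome."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based index of the flipped bit; 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                          # corrupt one bit "in transmission"
print(hamming74_correct(code) == hamming74_encode(data))   # True
```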

Two way error correction. Things like ARQ (automatic repeat request).

Main types of FEC codes:

See http://pfister.ee.duke.edu/thesis/chap1.pdf, and other chapters.

(IC 1.3) Applications of Error-correcting codes

Quantum error correction

Ethereum

guillefix 21st May 2016 at 10:53pm

Ethics

guillefix 11th May 2016 at 7:17pm

See discussion at Emotion

Ethnic and cultural studies

guillefix 8th April 2016 at 5:51pm

Ethology

guillefix 5th July 2016 at 3:53am

Ethology. Study of animal behaviour

Eukaryotes

guillefix 8th July 2016 at 1:03am

One of the most studied model organisms is Saccharomyces cerevisiae (budding yeast)

Types of eukaryotes

Evolution

guillefix 21st July 2016 at 3:12pm

Evolution (wiki) is a positive feedback loop: it's all about changes that perpetuate those changes, i.e. changes to a gene that make that gene more likely to stick around. But it doesn't need to be a gene: you can have self-sustaining cultural changes, like memes and self-fulfilling prophecies. Of course, positive feedback loops are found in many places, and they are indeed one of the main causes of self-organization in complex systems, so it is nice to see that evolution is just an example of one.

Dawkins's idea of replicators (see his article) of course fits well, because replicators are just self-sustaining structures. See also Units of evolution: A metaphysical essay, this, and The Elementary Units of Heredity cited in his article.

See MMathPhys oral presentation, Evolutionary computing, Genetics

Evolutionary biology

Read book: Dawkins - The extended phenotype

Evolutionary Dynamics- Exploring the Equations of Life - by Martin A. Nowak slides website Evolutionary dynamics on graphs djvu

Modern evolutionary synthesis

History of evolutionary thought

Evolution theory

Theoretical evolutionary genetics - Felsenstein (book), pdf

ON THE FORMALIZATION OF THE EVOLVING TRANSFORMATION SYSTEM MODEL

Evolutionary Theory and Mathematics

Mathematical Modeling of Evolution

“The arrival of the fittest”: Toward a theory of biological organization

Neutral theory of evolution

Kimura's neutral theory of evolution. He proposed that (at least for molecular evolution) most mutations are neutral, meaning that they don't lead to a change in fitness.

Evolutionary developmental biology

Features of evolving systems

Effects in evolution

Bias in GP maps, Arrival of the frequent

Genetic information and evolution

See Genetics

Genotype-phenotype map, Bias in GP maps

Discovery of a fundamental limit to the evolution of the genetic code

Scientists discover the evolutionary link between protein structure and function

Evolutionary computing

Evolution in Complex systems

Bias in GP maps


Some older disorganized thoughts:

Replicators at different levels.

Multilevel selection may not be necessary. However, it may be useful: it offers different ways of looking at evolution at different levels, depending on which processes are most important, mostly which (approximate) replicators are being looked at.

Group, kin, individual, gene etc selections are just different proximate/ultimate levels of causation on the same evolutionary process


People

https://en.wikipedia.org/wiki/Ernst_Mayr

See in wiki article of evolution

Statistical Physics of Adaptation

Evolutionary computing

guillefix 30th June 2016 at 1:38am

https://en.wikipedia.org/wiki/Evolutionary_computation

See Evolution

Computational intelligence - Scholarpedia

Evolution of evolvability Slides

Complexity compression and evolution

Genetic programming

https://en.wikipedia.org/wiki/Genetic_programming

https://en.wikipedia.org/wiki/Gene_expression_programming

presentation

See Holland's work. For e.g.

Holland, J. H. (1992). Adaptation in Natural and Artificial Systems, MIT Press, Cambridge MA.

Three Elements of a Theory of Representations

Redundant Representations in Evolutionary Computation: "As a result, uniformly redundant representations do not change the behavior of GAs. Only by increasing r, which means overrepresenting the optimal solution, does GA performance increase. Therefore, non-uniformly redundant representations can only be used advantageously if a-priori information exists regarding the optimal solution."

Bias towards simplicity (see MMathPhys oral presentation) similar to regularization in Machine learning?

Evolvable hardware

https://en.wikipedia.org/wiki/Evolvable_hardware

Whatever happened to evolvable hardware?

https://en.wikipedia.org/wiki/Reconfigurable_computing

Automated Antenna Design with Evolutionary Algorithms

Artificial life

http://www.framsticks.com/

Logos software from MIT for agent-based simulation and others

Nils Aall Barricelli

Conway's game of life

http://www.scholarpedia.org/article/Game_of_Life

Automata theory, cellular automata.

Smooth cellular automata: https://www.youtube.com/watch?v=KJe9H6qS82I

Life in life, meta

ASCII fluid dynamics

Benefits of Sexual Reproduction in Evolutionary Computation

Evolutionary developmental biology

guillefix 23rd June 2016 at 10:16pm

Evolving automata

guillefix 22nd July 2016 at 6:05pm

See MMathPhys oral presentation, Automata theory


http://link.springer.com/chapter/10.1007/978-3-642-23780-5_20#page-1

http://www.sciencedirect.com/science/article/pii/S0031320305000294

http://www.mitpressjournals.org/doi/abs/10.1162/neco.1992.4.3.393#.V5JBI-02fCI

http://www.mitpressjournals.org/doi/abs/10.1162/neco.1989.1.3.372#.V5JBB-02fCI

https://scholar.google.com/scholar?start=20&q=learning+finite+state+transducer+complexity&hl=en&as_sdt=0,5


Simplicity bias in finite-state transducers

Random deterministic automata

Evolving Finite State Machines with Embedded Genetic Programming for Automatic Target Detection

Learning Finite-State Transducers: Evolution Versus Heuristic State Merging

Boolean network and their evolution (What Darwin didn't know: natural variation is structured).

Introducing Domain and Typing Bias in Automata Inference

Random Deterministic Automata

An Automaton Approach for Waiting Times in DNA Evolution

Also, genetic regulatory networks: Highly designable phenotypes and mutational buffers emerge from a systematic mapping between network topology and dynamic output, Evolvability and robustness in a complex signalling circuit

Entropy of a Finite State Transducer

$$H = - \sum_{\beta, \alpha, y} P_{\beta}P_{\alpha} p_{\alpha, \beta}(y)\log{p_{\alpha, \beta}(y)} = - \sum_{\beta, \alpha, y}P_{\beta}P_{\alpha} \left(\sum_{x\in F^{-1}(y)} p_{\alpha}(x)\right)\log{\left(\sum_{x\in F^{-1}(y)} p_{\alpha}(x)\right)}$$

Ergodicity of Random Walks on Random DFA

On the Effect of Topology on Learning and Generalization in Random Automata Networks

Quantifying the complexity of random Boolean networks

The state complexity of random DFAs

http://tuvalu.santafe.edu/~walter/AlChemy/alchemy.html Artificial chemistry

Comparing nondeterministic and quasideterministic finite-state transducers built from morphological dictionaries


Is a random transducer an appropriate random model for GP maps in Nature?

For instance, in Gene regulatory networks, when modelled as random Boolean networks, the state transition network is probably not just a random transducer... Though maybe it depends on the regime: for instance, in the critical regime we apparently observe the largest GP map bias.

Examples of finite-state transducers and their simplicity bias

guillefix 12th July 2016 at 12:49am

See Simplicity bias in finite-state transducers

You need to be able to loop around the non-coding region, and around the coding region to get non-trivial designability/complexity plots.

This FST shows a good example of an approximately absorbing region with two non-coding states. The fact that the region is not fully absorbing, and that there is a cycle outside that region, means we will get variety in the output.

FST table:

0 2 1 1 0 1 0 1 1 4 1 0 1 3 0 0 2 1 1 1 2 0 0 0 3 2 1 0 3 1 0 0 4 1 1 0 4 1 0 0 0 1 2 3 4

In this example there is a clear bias towards a $000000\ldots$ sequence, as there is an absorbing region made entirely of 0-noncoding states. However, the rest of the FST does not have any loop, so there's barely any possibility for variety of outputs, and the designability/complexity plot is trivial.

Here is an example of an FST with an approximately absorbing region of non-coding states that is the whole FST.

Examples of GP map bias

guillefix 18th May 2016 at 1:15am

Explosive percolation

guillefix 13th June 2016 at 7:39pm

Percolation processes that show a discontinuous, or at least very steep, phase transition. See this image for a nice summary of types of explosive percolation processes. The reviews below also summarize results, and below we discuss some of the main types.

Explosive Percolation: Novel critical and supercritical phenomena

Impact of single links in competitive percolation

Achlioptas processes

Achlioptas processes follow $m$-edge rules, which involve choosing $m$ candidate edges uniformly at random between any pair of nodes (compare with the Spanning cluster-avoiding process) and applying a rule to select which one is actually chosen. These have been proven to be continuous in the thermodynamic limit, for fixed $m$.

k-vertex rule percolation process

Processes based on choosing $k$ vertices at random and adding edges among those vertices according to some rule. $k$-vertex rules are actually a generalization of $m$-edge rules.

Half-restricted processes

Half-restricted process is a variant of the Erdős–Rényi process which exhibits a discontinuous phase transition.

Explosive Percolation in Erdős-Rényi-Like Random Graph Processes

In each step, two vertices are connected by an edge, but one of them is restricted to be within the smaller components: more specifically, it must belong to the restricted vertex set, $R_f(G)$, defined as the set composed of a given fraction, $f$, of the total nodes, chosen in ascending order of the size of the component they belong to. Note that the restricted vertex set is recalculated after every step, as the clusters have changed.

This process exhibits a discontinuous percolation transition for any $f < 1$.

Spanning cluster-avoiding process

A spanning cluster-avoiding process (SCA) is an Explosive percolation model based on classifying bonds into those that facilitate the creation of the spanning cluster and those that don't, and preferentially selecting those that don't. They are similar to Achlioptas processes ($m$-edge processes). However, they don't require the candidate edges to be chosen at random between any pair of nodes; instead, the candidate edges can belong to a predetermined underlying network, commonly a hypercubic lattice. They are capable of showing discontinuous transitions, for certain choices of the number of candidate edges chosen per step.

I think there should be a term for $m$-edge-like processes that have an underlying network...

Applications of explosive percolation models

Extensions of preferential attachment models (Network theory)

guillefix 24th February 2016 at 12:42am

See Models of network formation

  • Edges (like hyperlinks) may also disappear. They may also appear at times after the nodes are added.
  • Nodes may also disappear (like websites).
  • Preferential attachment could be non-linear on degree, or it could depend on other network property of the node.
  • Nodes may have some intrinsic fitness too.

Models with extra edge addition

The model can consist of the BA model, but with an extra process carried out at each step: a given number of edges, $w$, is added to the network between pairs of nodes chosen with probability proportional to their degree. One can again construct a master equation, and get a power-law degree distribution.

Similar models exist that generalize the Price's model instead of BA.

Edge removal

Simple model: at each update step we remove $v$ edges chosen uniformly at random from the set of all edges. The probability that node $i$ loses an edge connected to it, for each of these removals, is $2k_i/\sum_i k_i$. This is because randomly choosing an edge means randomly choosing a pair of stubs, and $i$ will lose an edge when either of these randomly chosen stubs coincides with one of the $k_i$ stubs incident to $i$. The probability of this happening for each of the randomly chosen stubs is $k_i/\sum_i k_i$, and the probability that either stub is from $i$ is $2k_i/\sum_i k_i$ minus the probability that both ends are on the same edge. However, the probability that both ends are on the same edge is $0$, because the BA network formation model doesn't allow self-edges to form. Therefore we are left with $2k_i/\sum_i k_i$, as in Newman's book.
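The $2k_i/\sum_i k_i$ factor is easy to check by Monte Carlo on a toy graph (a sketch with a hand-picked edge list, not from the source):

```python
import random

# Picking an edge uniformly at random should hit node i with probability
# 2*k_i / sum_j k_j (no self-edges), since each edge contributes two stubs.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]    # node 0 has degree k_0 = 3
total_stubs = 2 * len(edges)                # sum of all degrees = 8

random.seed(0)
trials = 100_000
hits = sum(0 in random.choice(edges) for _ in range(trials))
print(hits / trials)                        # ~ 2*3/8 = 0.75
```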

Models with edge addition and removal

One can also combine the two models above. The master equation in this case becomes more complicated, because $p_k$ now depends on both $p_{k+1}$ and $p_{k-1}$. Generating function methods then need to be used. See Newman section 14.4.2 or the paper Exact solutions for models of evolving networks with addition and deletion of nodes for the detailed calculation; a power-law degree distribution is still obtained (though with a different exponent, of course), as long as the edge removal rate is not too high.

One can also do the analogous calculation for removal and addition of nodes.

Non-linear preferential attachment

Attachment probability may depend nonlinearly on degree, i.e. we have a nonlinear attachment kernel.

One can still derive an asymptotic form of the degree distribution for the case $a_k \propto k^\gamma$, of interest because empirical networks have shown this form of preferential attachment. For $1/2 < \gamma < 1$, the degree distribution is no longer a power law, but a "stretched exponential" of the form:

$$p_k \sim k^{-\gamma} \exp{\left(-\frac{\mu k^{1-\gamma}}{c(1-\gamma)}\right)}$$

This function decays slower than an exponential because $1-\gamma < 1$. There are also similar but more complicated expressions for other $\gamma$ in the range $(0,1)$.

One can also calculate the case of superlinear preferential attachment, with $\gamma > 1$. In this case it turns out that a "leader" emerges in the network, gaining a non-zero fraction of all edges asymptotically, with the rest of the nodes having degree less than some fixed constant. See here for more.

Nodes with inherent fitness

Inherent fitness, a.k.a. attractiveness, may vary across nodes in the network.

See Bose-Einstein condensation in complex networks and Competition and multiscaling in evolving networks for a model. In it a fitness value, $\eta_i$, is assigned to each node (sampled from a given distribution $\rho(\eta)$), and is unchanged thereafter. Now the attachment kernel depends on $\eta$ as well: $a_k(\eta)$. The same method as used for the section Degree distribution as a function of time of creation above can be applied (with $\eta$ instead of $t$), and a solution can be obtained analytically for the case $a_k(\eta) = \eta k$: a power-law distribution is obtained for each $\eta$, but not overall, as that depends on what $\rho(\eta)$ is.

In Bose-Einstein condensation in complex networks, they show an interesting effect that happens for some choices of $\rho(\eta)$, where a few nodes (a constant number of them, so as a fraction they go as $1/n$ and vanish as $n \rightarrow \infty$, and so don't appear in $p_k$) have a degree proportional to $n$, and so do contribute to quantities like $\langle k \rangle$. This is analogous to Bose condensation. However, it is not known which $\rho(\eta)$ will produce condensation, and computer simulations suggest that whether condensation occurs or not may depend on the fluctuations and thus not be deterministic (see Polya's urn; is this at all related to the Ross–Littlewood paradox?).

There is also interesting work on the statistics of the node with maximum fitness (which changes more and more rarely, as a higher value of $\eta$ is sampled at some updates). These follow so-called record dynamics; see Slow dynamics from noise adaptation.

More relevant review articles:

Statistical mechanics of complex networks

Complex networks: Structure and dynamics

Evolution of networks

Fano's inequality

guillefix 5th July 2016 at 12:58pm

Features of evolving systems

guillefix 23rd June 2016 at 10:14pm

Some features that are important in the behaviour of an evolving system.

Important features:

The Evolution of Evolvability - Dawkins

Evolution of evolvability

Fibrous material

guillefix 9th May 2016 at 8:36pm

A fibrous material is any material system formed by fiber-like constituents such as felt, cloth, paper, muscle and wood.

On uniqueness of fibrous materials

Fibrous Materials

Filter (signal processing)

guillefix 1st July 2016 at 5:07pm

Filter (Topology)

guillefix 14th July 2016 at 3:22am

A filter $\mathcal{F}$ on $X$ is a family of subsets of $X$ such that:

(a) $\emptyset \notin \mathcal{F}$;

(b) $\mathcal{F}$ is algebraically closed under finite intersections;

(c) $\mathcal{F}$ is an upper family.

An upper family refers to a family of subsets which is an Upper set w.r.t. the Lattice of subsets of $X$, that is, if a set is in the family, then any superset of that set is also in the family.

See also Filter base

Filter base

guillefix 14th July 2016 at 2:16pm

A filter base $\mathcal{B}$ is a family of non-empty subsets of a Set $X$ such that if $A, B \in \mathcal{B}$ then there exists $C \in \mathcal{B}$ such that $C \subseteq A \cap B$.

This can be used to construct a Filter (Topology):

This notion can also be extended so that a family of filter bases (which we call a base, or a basis) generates the filters forming the Neighbourhood structure of a Neighbourhood space, or of a Topological space. For a topological space, arbitrary unions of sets in the filter base can be considered to generate the open sets.

A filter base can in turn be generated by a Filter subbase

Filter subbase

guillefix 14th July 2016 at 2:03pm

A filter subbase can generate a Filter base (by taking finite intersections), and, like it, the notion can be extended so that a family of filter subbases (which we call a subbase) generates a whole Topological space.

Note that the sets forming the subbase are part of the base they generate, because finite intersections include the intersection of a set with itself.

Finite state channel

guillefix 5th July 2016 at 6:00pm

In Information theory, and in particular, Data transmission, a finite state channel (FSC) is a discrete-time channel where the distribution of the channel output depends on both the channel input and the underlying channel state. This allows the channel output to depend implicitly on previous inputs and outputs via the channel state.

The channel can be modelled as a stochastic Finite-state transducer.

See here for more: http://pfister.ee.duke.edu/thesis/chap4.pdf

Entropy and Mutual Information for Markov Channels with General Inputs

Blackwell, Breiman, and Thomasian introduced indecomposable FSC (IFSC) in [7] and proved the natural analogue of the channel coding theorem for them. Birch discusses the achievable information rates of IFSCs in [5], and computes bounds for a few simple examples.

Examples of finite state channels

  • Discrete-time Linear filter channels with AWGN
  • Dicode erasure channel
  • Finite state Z-channel

Trellis diagrams

Similar to the mapping between Boolean lattices and directed percolation. See Relations between the stability of Boolean networks and percolation.

Definitions and basic properties of FSCs

See Markov chain

In the thesis he considers a Markov input process as Information source

Combining the Markov Input Process and the Finite State Channel, gives a new Markov process over the states given by the cross product of the states of the channel and of the input. They label this new set of states by integers too. This combined process is what they call a Finite state process (FSP).

Entropy rate of a finite state process

See Random matrix product

Capacity of finite state markov channels with general inputs

A Randomized Approach to the Capacity of Finite-State Channels

Capacity, mutual information, and coding for finite-state Markov channels

Finite-state machine

guillefix 15th July 2016 at 9:30pm

See Automata theory

Finite number of states; transitions between them are followed according to a sequentially read (a.k.a. on-line) input string.

Formally, a finite automaton on an alphabet $A$ is a tuple $(Q,I,F,E)$, where $Q$ is the set of states, and $I$ and $F$ are subsets of $Q$: the sets of initial and final states, respectively. $E \subset Q \times A \times Q$ is the set of edges between states, labelled by a letter in the alphabet. The transition encoded by an edge is performed when the automaton reads the edge's letter while being at the first state of the transition.
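Running a deterministic automaton is just repeated table lookup; a minimal sketch (the parity automaton and all names are mine):

```python
def accepts(delta, start, accepting, word):
    """Run a deterministic finite automaton: delta maps (state, letter) -> state."""
    state = start
    for letter in word:
        state = delta[(state, letter)]
    return state in accepting

# Two-state DFA over {0, 1} accepting strings with an even number of 1s.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd', ('odd', '1'): 'even'}
print(accepts(delta, 'even', {'even'}, '10110'))   # False (three 1s)
print(accepts(delta, 'even', {'even'}, '1001'))    # True (two 1s)
```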

Deterministic finite automaton

Deterministic machine Reversing deterministic machines

Non-deterministic finite automaton

Non-deterministic finite state machines can have more than one transition that may be taken when reading a certain input symbol in a state. They may also have epsilon transitions, which can be taken without reading a symbol. A string is accepted if there is at least one path through the machine that ends in an accepting state.

Deterministic machines are equivalent in power to non-deterministic ones. But non-deterministic machines are sometimes much easier to think with.

Convert a non-deterministic machine to a deterministic machine
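The conversion is the subset construction: each DFA state is the set of NFA states reachable so far. A sketch, assuming no epsilon transitions (automaton and names are mine):

```python
def nfa_to_dfa(delta, start, accepting, alphabet):
    """Subset construction: DFA states are frozensets of NFA states.
    delta maps (state, letter) -> set of states (no epsilon moves assumed)."""
    start_set = frozenset([start])
    states, todo = {start_set}, [start_set]
    dfa_delta = {}
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in states:
                states.add(T)
                todo.append(T)
    dfa_accepting = {S for S in states if S & accepting}
    return dfa_delta, start_set, dfa_accepting

# NFA over {a, b} accepting strings ending in "ab" (states 0 -> 1 -> 2).
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
dfa_delta, q0, acc = nfa_to_dfa(delta, 0, {2}, 'ab')
print(len({S for (S, _) in dfa_delta}))   # number of reachable DFA states
```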

Equivalence between non-deterministic and deterministic machines is the key in proving that regular sets are closed under reversal.

To construct an FSM that accepts the complement of a regular set, just swap the accepting and non-accepting states (of a deterministic machine).

Closure under union key point

Closure under intersection

Why are regular sets called regular? he uses a nice heuristic explanation of the pumping lemma

See also Finite-state transducer


Build an FSM on the web: http://madebyevan.com/fsm/

Finite-state transducer

guillefix 15th July 2016 at 2:29pm

Firmware

guillefix 31st January 2016 at 8:29pm

first_passage_path.jpg

First-passage time

guillefix 4th May 2016 at 11:56pm

First-passage time

Can also calculate using survival probabilities. See notes!!

See Backwards Fokker-Planck equation

Fisher information matrix

guillefix 25th June 2016 at 3:24pm

The Fisher information matrix (FIM) is the negative Hessian of the $\log$-Likelihood function, evaluated at its maximum.

If one Taylor expands the log-likelihood around a maximum and keeps only terms up to second order, we are approximating the peak by a Gaussian, and this is what is done to find the FIM.

Intro to Fisher Matrices

The Covariance matrix is the inverse of the Fisher matrix.

$\chi^2$ can be calculated as $\chi^2 = \delta F \delta^T$, where $F$ is the FIM, and $\delta$ is a small step in parameter space from the maximum of the likelihood.
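A numerical sketch, assuming a Gaussian likelihood with known unit variance, where the observed information for the mean should equal $n/\sigma^2$ (all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)

def loglike(mu, sigma=1.0):
    # Log-likelihood of the data under a Gaussian model, up to a constant.
    return -0.5 * np.sum((x - mu) ** 2) / sigma**2

# Observed Fisher information: negative second derivative of the
# log-likelihood at its maximum, here estimated by finite differences.
mu_hat = x.mean()
h = 1e-3
F = -(loglike(mu_hat + h) - 2 * loglike(mu_hat) + loglike(mu_hat - h)) / h**2
print(F)         # ~ n / sigma^2 = 500
print(1.0 / F)   # ~ variance of the estimate of mu (inverse of the FIM)
```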

Fishing

guillefix 27th February 2016 at 3:00pm

Fishing is the capture of fish, and all the related art and science

Under-ice fishing

fixed_points_classification1.png

guillefix 16th February 2016 at 11:47am

fixed_points_classification2.png

guillefix 16th February 2016 at 11:48am

Fixed-point iteration

guillefix 27th April 2016 at 9:26pm

faster if the expansion sequence is unknown (i.e. we don't know if it's a power series or a log series, for instance); slower if the expansion sequence is known.

For example to find roots of an equation we need to express it as:

$$x^* = g(x^*; \epsilon)$$

where $x^*$ is the solution we're looking for. Then, starting from a guess $x_0$ (which if possible should be chosen to be the solution for $\epsilon = 0$, so that the solution is right to order 1 at least), we iterate:

$$x_{n+1} = g(x_n; \epsilon)$$

and the iterations should get better if $|g'(x^*; \epsilon)| < 1$ (prime = derivative) and $x_0$ is suitably chosen. However, to get an asymptotic expansion we actually require $g'(x^*; \epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$. In particular, if $g'(x^*; \epsilon) = o(\epsilon)$, one gets one term of a power-series expansion per iteration, as can be seen from the argument in the notes, where we see that the difference between the true answer and the current answer gets multiplied by $g'(x^*; \epsilon)$ at every iteration. If we don't know the order of $g'(x^*; \epsilon)$, the way to check if the iteration is right up to some order is to try one more iteration and see if the term changes (though I don't think that's definite proof).

The usual procedure is to place the dominant term of the equation on the $x_{n+1}$ side (i.e., the side that will give the new value), so that it can be calculated as a function of the terms on the $x_n$ side (i.e., the previously obtained value). As we will see later, the identity of the dominant term can be adjusted by scaling. I think we place the dominant term of the equation on the $x_{n+1}$ side because that ensures we choose that term to be right to first order in the 0th iteration, and so the equation is right to first order. In the simple example of $x = \pm \sqrt{1-\epsilon x}$, which comes from $x^2 + \epsilon x - 1 = 0$, we selected the $x^2$ term; if we had selected the $\epsilon x$ term, we would have had to divide by $\epsilon$, and the $\epsilon = 0$ case would not be well defined, indicating that we want to get the dominant term right in the equation. Another way to look at it is dominant balance: by putting the dominant term on the LHS, $x^* = g(x^*; \epsilon)$ approximately expresses dominant balance!
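A sketch of this example: iterating $x_{n+1} = \sqrt{1 - \epsilon x_n}$ for the positive root of $x^2 + \epsilon x - 1 = 0$, starting from the $\epsilon = 0$ solution $x_0 = 1$:

```python
import math

# Solve x^2 + eps*x - 1 = 0 by fixed-point iteration: the dominant x^2 term
# is placed on the "new value" side, giving g(x) = sqrt(1 - eps*x).
# Here |g'(x*)| ~ eps/2 << 1, so the iteration contracts quickly.
eps = 0.1
x = 1.0                       # the eps = 0 solution, right to first order
for _ in range(50):
    x = math.sqrt(1 - eps * x)

exact = (-eps + math.sqrt(eps**2 + 4)) / 2   # positive root, quadratic formula
print(x, exact)               # both ~ 0.95125
```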

For the iterative method, different functions $g$ may be needed to find different perturbed roots of an algebraic equation, so that the condition $g'(x^*; \epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$ is satisfied.

The proof that this method works is based on a Fixed-point theorem, in particular on the contraction mapping theorem, also used in the proof that Fractals are well defined.

See more at Fixed-point iteration

If |gradient|>1, iteration doesn't converge:

Fixture

guillefix 5th July 2016 at 4:26am

A piece of equipment or furniture that is fixed in position in a building or vehicle.

flashing ratchet.png

guillefix 22nd January 2016 at 7:17pm

Fluid dynamics

guillefix 2nd May 2016 at 2:18pm

Fluid dynamics is the branch of Fluid mechanics that describes the causes of motion, i.e. the forces and torques that can affect fluids, and how these affect their motion.

The equations of fluid dynamics can be derived from the principles of Mechanics (in particular continuum mechanics). More recently they have also been derived from the microscopic statistical picture of moving and interacting particles thanks to the development of Kinetic theory.

Navier-Stokes equation

https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations

Oxford course. Bachelor's book, etc.

https://www.youtube.com/watch?v=pqWwHxn6LNo&list=PL0EC6527BE871ABA3&index=2

https://en.wikipedia.org/wiki/Strain_rate_tensor

See table I made for 3rd year revision

Convection-diffusion

of heat, particles, etc.

https://en.wikipedia.org/wiki/Convection%E2%80%93diffusion_equation

http://www.cfm.brown.edu/people/gk/chap9/node1.html

Fluid kinematics

guillefix 22nd June 2016 at 3:18am

Fluid kinematics is the branch of Fluid mechanics that (just like kinematics, in Mechanics) describes the possible motion of fluids.

Flow can be decomposed into:

  • translation
  • rotation
  • strain

Pure shear is a combination of rotation and strain.

Kelvin's circulation theorem

Fluid mechanics

guillefix 2nd May 2016 at 1:52pm

The branch of Mechanics that deals with the motion and the forces that affect fluids

A fluid is a piece of matter that has no, or negligible, shear elasticity. This means it flows under virtually any applied shear stress.

There are three main phases of matter that are fluid:

  • Liquid. Fully homogeneous/isotropic condensed phase composed of atoms.
  • Gas. Fully homogeneous/isotropic non-condensed phase composed of atoms.
  • Plasma. Fully homogeneous/isotropic non-condensed phase composed of ionized atoms and/or other charged particles. See Plasma physics.

More complex fluid phases, often composed of mixtures are called complex fluids.

Fluid dynamics describes the causes of motion, i.e. the forces and torques that can affect fluids, and how these affect their motion. Magnetohydrodynamics and Electrohydrodynamics describe the dynamics of an electrically conductive fluid.

Fluid kinematics (just like kinematics, in Mechanics) describes the possible motion of fluids.

Foam

guillefix 9th May 2016 at 8:16pm

Foam is a substance that is formed by trapping pockets of gas in a liquid or solid, where the concentration of the gas phase is high (i.e. it occupies the majority of the volume).

Fokker_Planck.png

guillefix 20th January 2016 at 11:56pm

Fokker-Planck equation

guillefix 24th May 2016 at 1:21am

Deriving the FP equation from the Langevin equation. The Fokker-Planck equation works for Markov processes in position space, so it is derived from the overdamped Langevin equation, i.e. the one that ignores inertia.

tP(r,t)+[v(r)P(r,t)DP(r,t)]\partial_t P(\vec{r},t)+\vec{\nabla}\cdot[\vec{v}(\vec{r})P(\vec{r},t)-D\vec{\nabla}P(\vec{r},t)]

where $\vec{v}(\vec{r})$ is the drift velocity and $D$ is the diffusion constant.

Detailed balance and equilibrium

Setting tP(r,t)=0\partial_t P(\vec{r},t) = 0 and J=0\vec{J}=0, and using Einstein's relation, we get Boltzmann Distribution.
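A quick numerical sanity check of this (a sketch only, with a hypothetical harmonic potential $U(x)=\frac{1}{2}kx^2$, drift $v=-U'(x)/\gamma$, and Einstein's relation $D=k_BT/\gamma$; all parameter values are illustrative): the Boltzmann distribution $P \propto e^{-U/k_BT}$ makes the flux $J = vP - D\,\partial_x P$ vanish.

```python
import numpy as np

# Check that the Boltzmann distribution gives zero probability flux
# J = v(x) P(x) - D dP/dx, with drift v = -U'(x)/gamma and the
# Einstein relation D = kT/gamma. Parameter values are illustrative.
kT, gamma, k = 1.0, 2.0, 3.0
D = kT / gamma
x = np.linspace(-5, 5, 2001)
U = 0.5 * k * x**2
v = -k * x / gamma            # drift: -U'(x)/gamma
P = np.exp(-U / kT)           # (unnormalized) Boltzmann distribution
dPdx = np.gradient(P, x)      # numerical derivative
J = v * P - D * dPdx
assert np.max(np.abs(J)) < 1e-3   # flux vanishes up to discretization error
```

Analytically $\partial_x P = -(U'/k_BT)P$, so the two terms in $J$ cancel exactly; only finite-difference error remains.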

N-non-interacting particles

We get Smoluchowski equation.

N interacting particles

We get BBGKY hierarchy, as in Kinetic theory

Backwards Fokker-Planck equation Tells you how likely different initial conditions are to arrive at a certain fixed point in the future.

Applications of Fokker-Planck equation

First-passage time Calculation of the mean time required to leave a region.

Kramers rate theory The rate at which fluctuations push particles over a barrier.

Survival probability Crucial argument: reflecting parts of the trajectory leaves the probability unchanged (the reflection principle). See also here for a nice derivation from the boundary conditions

Stationary solution of 1D FP equation

Brownian ratchets

Assume a periodic potential U(x)U(x) with a bias:

V(x)=U(x)FxV(x)=U(x)-Fx

and assume the solution is periodic:

P(x+L)=P(x)P(x+L)=P(x)

This is not the equilibrium solution (which would be an exponentially growing P to compensate the bias, just like the exponential growth of density in gravity or in a constant electric field). Therefore $J \neq 0$ even though it is stationary, $\partial_t P=0$. If we integrate this from $0$ to $L$ taking this periodicity into account:

The easiest way to calculate escape time from one well to the next is to assume there is one particle per well:

The average drift velocity is vdriftLTesc=JLv_{drift} \equiv \frac{L}{T_{esc}}=JL.

Fluctuation-driven transport

Analogous to AC rectification in diodes!

Mathematical properties of FP eq

Quantum mechanical analogy

See video, and the lecture notes!

Also applicable in Path integrals for stochastic processes

Stochastic quantization and path integral formulation of Fokker-Planck equation

Food

guillefix 20th June 2016 at 11:03pm

Formal grammar

guillefix 29th June 2016 at 2:29am

Formal language

guillefix 29th June 2016 at 7:38pm

Relations to compilers, parsers, etc. Grammars, etc.

A nice new language for this: Ohm

See Automata theory, GKeep notes.

Chomsky hierarchy. (see also Theory of computation).

Mathematics - Formal Languages and Automata Theory

Formal system

guillefix 24th April 2016 at 7:34pm

Formal systems and semantics

guillefix 23rd June 2016 at 11:19pm

Languages, grammars, etc.

Chomsky hierarchy

See Theory of computation

(Abstract) Rewrite systems

A set of objects, and a binary relation, $\rightarrow$, that tells us how we are allowed to transform expressions. If these rules act on terms out of which an expression can be built, then this is a term rewrite system.

They are non-deterministic Markov algorithms, and they are Turing complete. They are related to normal forms, lambda calculus, and combinatory logic

http://www.cs.tau.ac.il/~nachum/papers/survey-draft.pdf
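A toy sketch of a string rewrite system (the rule set is hypothetical, chosen only for illustration): applying the single rule "ba" → "ab" repeatedly, leftmost-first, terminates in a normal form — here, the sorted string.

```python
# A toy (string) rewrite system: repeatedly apply the first applicable
# rule until no rule matches, i.e. until a normal form is reached.
def rewrite(s, rules):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # one (leftmost) rewrite step
                changed = True
                break
    return s  # normal form: no rule applies

# Hypothetical rule: "ba" -> "ab" sorts a string over the alphabet {a, b}
rules = [("ba", "ab")]
assert rewrite("babba", rules) == "aabbb"
```

In general, termination and uniqueness of the normal form (confluence) must be proved per system; this particular rule happens to have both.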

Formal system

CM20019—Computation III: Formal Logic and Semantics

Forward osmosis

guillefix 2nd July 2016 at 3:15pm

Fractal

guillefix 15th July 2016 at 9:36pm

Friction

guillefix 13th July 2016 at 3:30pm

frog on unicycle

guillefix 8th May 2016 at 11:55pm

Oh shit waddup

Frontend web development

guillefix 9th July 2016 at 3:34pm

Frontend web development

JavaScript

https://medium.freecodecamp.com/angular-2-versus-react-there-will-be-blood-66595faafd51#.4bc9n0ott ReactJS seems better

CSS libraries

See Voxel.css for Minecraft-like stuff in browser


Graphics and visualization web libraries

Graphics and visualization

~ ~ ~

Maths frontend web libraries

http://fortawesome.github.io/Font-Awesome/

Webgl See chromeexperiments website

Nice 2D webgl lib: http://www.pixijs.com/

voxel.css

http://codepen.io/sha99y8oy/pen/GZZXyL

http://www.effectgames.com/demos/canvascycle/

HTML presentations: impress.js, reveal.js, deck.js

Audio libraries

See here: https://musiclab.chromeexperiments.com/Technology

Tone.js

For microphone input: https://en.wikipedia.org/wiki/WebRTC

Input libraries

For accelerometer, gyroscope input (from phone for eg) see chromeexperiments

For microphone input: https://en.wikipedia.org/wiki/WebRTC

CMS

Jekyll

Function

guillefix 7th July 2016 at 6:39pm

A type of Relation between two Sets, such that each element of the set called the domain is related to exactly one element of the set called the co-domain.
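A minimal sketch of this definition, treating a relation as a set of ordered pairs and checking the "exactly one image per domain element" condition (the sets and names below are illustrative):

```python
# A relation between two sets as a set of ordered pairs; it is a
# function iff every element of the domain appears in exactly one pair.
def is_function(relation, domain):
    return all(sum(1 for (a, b) in relation if a == x) == 1 for x in domain)

domain = {1, 2, 3}
f = {(1, 'a'), (2, 'b'), (3, 'a')}   # a function (not injective, which is fine)
r = {(1, 'a'), (1, 'b'), (2, 'a')}   # not a function: 1 has two images, 3 has none
assert is_function(f, domain)
assert not is_function(r, domain)
```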

Types of function


Preimage

Functional analysis

guillefix 15th July 2016 at 9:39pm

Functional calculus

guillefix 26th January 2016 at 7:11pm

Functional derivative

Functional integration

guillefix 26th January 2016 at 7:10pm

John Klauder - Lectures on Functional Integration

Some Recommended Books

G. Roepstorff, "Path Integral Approach to Quantum Physics", Springer-Verlag, Berlin, 1996

R. Feynman and A. Hibbs, "Quantum Mechanics and Path Integrals", McGraw-Hill, New York, 1965

A.V. Skorokhod, "Studies in the Theory of Random Processes", Addison-Wesley Publishing, Reading, Massachusetts, 1965

B. Simon, "Functional Integration and Quantum Physics", Academic Poress, New York, 1979

L. Schulman, "Techniques and Applications of Path Integration", John Wiley & Sons, New York, 1981

J. Klauder and B-S. Skagerstam, "Coherent States", World Scientific, Singapore, 1985

C. Grosche and F. Steiner, "Handbook of Feynman Path Integrals", Springer-Verlag, Berlin, 1998

J. Klauder, "Beyond Conventional Quantization", Cambridge University Press, Cambridge, 2000

H. Kleinert, "Path Integrals in Quantum Mehcanics, Statistics, and Polymer Physics", 3rd Edition, World Scientific, Singapore, 2003

Functional programming

guillefix 28th June 2016 at 4:45am

Introduction to Functional Programming youtube videos


Functional programming in Javascript

  • Higher-order functions: functions that take functions as arguments. Functions are like variables too. Examples:
  • Functors. Objects that hold collections of objects (like arrays) that have the map() function, so that one can map these collections to other collections of the same size. One can make the analogy more precise between these and functors in Category theory, which is very related to functional programming ideas.
  • Monoids.

Functional programming languages

Clojure

The syntax is so nice. As he says in the vid, there is basically no syntax. It also reminds me of the data structures used for CASs

Lisp

Scala. yt vids

Scheme

Haskell. http://learnyouahaskell.com/

Functional programming on JavaScript

guillefix 27th June 2016 at 10:43pm

Furniture

guillefix 1st July 2016 at 11:44pm

Future of Humanity

guillefix 6th February 2016 at 1:20am

Galactic astronomy

guillefix 5th July 2016 at 3:24am

Galaxy

guillefix 5th July 2016 at 3:28am

A gravitationally bound system of Stars (together with gas, dust, and usually dark matter)

Galaxy group

guillefix 5th July 2016 at 3:28am

A group of Galaxies

Galaxy supercluster

guillefix 5th July 2016 at 3:27am

Game

guillefix 13th June 2016 at 7:56pm

Wikipedia:Portal/Directory/Sports and games

Sport

https://en.wikipedia.org/wiki/Game#Definitions

Computer game designer Chris Crawford, founder of The Journal of Computer Game Design, has attempted to define the term game[8] using a series of dichotomies:

Creative expression is art if made for its own beauty, and entertainment if made for money. A piece of entertainment is a plaything if it is interactive. Movies and books are cited as examples of non-interactive entertainment. If no goals are associated with a plaything, it is a toy. (Crawford notes that by his definition, (a) a toy can become a game element if the player makes up rules, and (b) The Sims and SimCity are toys, not games.) If it has goals, a plaything is a challenge. If a challenge has no "active agent against whom you compete," it is a puzzle; if there is one, it is a conflict. (Crawford admits that this is a subjective test. Video games with noticeably algorithmic artificial intelligence can be played as puzzles; these include the patterns used to evade ghosts in Pac-Man.) Finally, if the player can only outperform the opponent, but not attack them to interfere with their performance, the conflict is a competition. (Competitions include racing and figure skating.) However, if attacks are allowed, then the conflict qualifies as a game.

Mathematical study of games

Game theory

Combinatorial game theory

Game development

guillefix 3rd July 2016 at 5:16am

In particular, video games and computer games; but generally, any Games

https://www.unrealengine.com/what-is-unreal-engine-4

https://goocreate.com/

Unity 5

Minecraft

Mods

Applied Energistics

Quantum one made by MIT

See Voxel.css for Minecraft-like stuff in browser. See Iconic maths ideas in Concrete mathematics in particular ones using cubes.

Game theory

guillefix 13th June 2016 at 7:52pm

Gastronomy

guillefix 27th February 2016 at 2:56pm

Gels

guillefix 11th June 2016 at 2:08pm

Gel:

Nonfluid colloidal network or polymer network that is expanded throughout its whole volume by a fluid.

A gel is thus a Porous solid with colloidal size pores, and filled with liquid. See also http://www.madsci.org/posts/archives/2001-03/984500675.Ch.r.html

It is a substantially dilute cross-linked system, which exhibits no flow when in the steady-state. By weight, gels are mostly liquid, yet they behave like solids due to a three-dimensional cross-linked network within the liquid.

Note 1: A gel has a finite, usually rather small, yield stress.

Note 2: A gel can contain:

(i) a covalent polymer network, e.g., a network formed by crosslinking polymer chains or by nonlinear polymerization;

(ii) a polymer network formed through the physical aggregation of polymer chains, caused by hydrogen bonds, crystallization, helix formation, complexation, etc., that results in regions of local order acting as the network junction points. The resulting swollen network may be termed a “thermoreversible gel” if the regions of local order are thermally reversible;

(iii) a polymer network formed through glassy junction points, e.g., one based on block copolymers. If the junction points are thermally reversible glassy domains, the resulting swollen network may also be termed a thermoreversible gel;

(iv) lamellar structures including mesophases {Ref.[4] defines lamellar crystal and mesophase}, e.g., soap gels, phospholipids, and clays;

(v) particulate disordered structures, e.g., a flocculent precipitate usually consisting of particles with large geometrical anisotropy, such as in V2O5 gels and globular or fibrillar protein gels.

Note 3: Corrected from ref.,[5] where the definition is via the property identified in Note 1 (above) rather than of the structural characteristics that describe a gel.[6]

Hydrogel: Gel in which the swelling agent is water.

Note 1: The network component of a hydrogel is usually a polymer network.

Note 2: A hydrogel in which the network component is a colloidal network may be referred to as an aquagel.

Note 3: Definition quoted from refs.[6][7][8]

Theory of Gelation

https://www.youtube.com/user/rmaloneymsu/videos

https://www.youtube.com/watch?v=AjWkd0VIsa8

Gender and sexuality

guillefix 8th April 2016 at 5:55pm

Gene regulatory networks

guillefix 26th April 2016 at 9:26pm

Epigenetics..

GP map bias

I.e. designability

They show GP map bias.

Highly designable phenotypes and mutational buffers emerge from a systematic mapping between network topology and dynamic output: certain dynamical phenotypes can be generated by an atypically broad spectrum of network topologies. Such dynamical outputs are highly designable, much like certain protein structures can be designed by an unusually broad spectrum of sequences.

The network topologies that encode a highly designable dynamical phenotype possess two classes of connections:

  • a fully conserved core of dedicated connections that encodes the stable dynamical phenotype and
  • a partially conserved set of variable connections that controls the transient dynamical flow.

Evolvability and robustness in a complex signalling circuit: The number of genotypes with a given phenotype varies very widely among these phenotypes. Some phenotypes have few associated genotypes. Others have many genotypes that form genotype networks extending far through genotype space. A minority of phenotypes accounts for the vast majority of genotypes. Importantly, we find that these phenotypes tend to have large genotype networks, greater robustness and a greater ability to produce novel phenotypes. Thus, over a broad range of phenotypic robustness, robustness facilitates phenotypic variability in our study system.

The effect of scale-free topology on the robustness and evolvability of genetic regulatory networks: We find that SF networks generate oscillations much more easily than ER networks do, and this may explain why SF networks are more evolvable than ER networks are for oscillatory phenotypes.

Shape-dependent control of cell growth, differentiation, and apoptosis: switching between attractors in cell regulatory networks.

Models

Boolean network

See Dynamical systems on networks

General relativity

guillefix 2nd June 2016 at 1:30am

http://blog.stephenwolfram.com/2016/02/black-hole-tech/

EINSTEIN LECTURE SERIES

Warp drive

Astronomy

Gravity waves observed! : Observation of Gravitational Waves from a Binary Black Hole Merger

Notes from David Wallace's talk

Standard approach at theories:

Start with manifold and geometric objects

There are some absolute objects, and dynamical objects.

Spacetime symmetry group leaves absolute objects invariant. In GR, there are no absolute objects, so the full diffeomorphism group.


Alternative: G-structured space

Kleinian geometry: subtractive construction.

vs Riemannian geometry: additive construction


Check video of David Wallace seminar 11/Feb/2016

Generalized function

guillefix 26th January 2016 at 7:11pm

Generalized function, also called distribution.

http://www.damtp.cam.ac.uk/user/dbs26/1BMethods/Distributions.pdf

They are found as limiting cases of functions, where the limit itself is not a function in the ordinary mathematical sense. However, they can be useful:

"They’re designed to fulfill an apparently mutually contradictory pair of requirements: they are sufficiently well-behaved that they are infinitely differentiable and thus have a chance to satisfy partial differential equations, yet at the same time they can be arbitrarily singular – neither smooth, nor differentiable, nor continuous, nor even finite – if interpreted naively as 'ordinary functions'."

One defines distributions as linear maps from the space of test functions (smooth functions with compact support) to the Real numbers. One can add distributions, multiply distributions by smooth functions, but in general there is no way to multiply two distributions together.

The most important example of a distribution that isn't just a function is the Dirac delta
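The Dirac delta can be seen concretely as such a limit. A numerical sketch (the test function and all numbers below are illustrative): integrating narrowing Gaussians ("nascent deltas") against a smooth test function $\varphi$ approaches $\varphi(0)$.

```python
import numpy as np

# The Dirac delta as a limit of narrowing Gaussians: acting on a smooth
# test function phi, the nascent delta's integral approaches phi(0).
def nascent_delta_action(phi, sigma, x):
    h = x[1] - x[0]
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(g * phi(x)) * h   # rectangle-rule quadrature

x = np.linspace(-10, 10, 200001)
phi = lambda t: np.cos(t) * np.exp(-t**2 / 8)   # an illustrative smooth test function
vals = [nascent_delta_action(phi, s, x) for s in (1.0, 0.1, 0.01)]
# narrower sigma -> closer to phi(0) = 1
assert abs(vals[-1] - 1.0) < 1e-3
assert abs(vals[-1] - 1.0) < abs(vals[0] - 1.0)
```

The leading error is $\tfrac{\sigma^2}{2}\varphi''(0)$, so halving $\sigma$ quarters the discrepancy.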

Genetic engineering

guillefix 8th April 2016 at 8:42pm

Gene editing with CRISPR/Cas9 Cas9 refers to a protein, found in bacterial immune systems, that is able to cut a DNA double strand at a point which matches the sequence of an RNA chimera (i.e. a molecule made of several RNA parts). This allows the programmable cutting of DNA. It is a particular type of restriction enzyme, which are enzymes that cut DNA at certain sites.

This is important for genetic engineering because it is known that when you cut DNA, one way DNA repairs is by rejoining the two ends of the cut by introducing a new piece of DNA.

Paper that announced discovery

Personal genome project, for "donating" your genome for research.

Cambrian Genomics DNA laser printing!

Gene therapy to save the world by Liz Parrish, CEO of BioViva. See Anti-ageing innovation.

Genetics

guillefix 20th April 2016 at 8:23pm

A gene is a particular portion of DNA in a chromosome that codes for a protein belonging to a certain family (that may then have some function in an organism or a cell). Every gene is identified with a particular protein, and vice versa (in standard biology).

A chromosome is a single molecule of DNA, containing many genes; an organism often has several chromosomes. In a chromosome, the DNA is wound on histone proteins, and very densely packed, so that it can fit inside the nucleus. The packing structure is illustrated here.

A locus (see wiki) is the physical part (the location) along the DNA sequence of a chromosome, that a particular gene is found in.

An allele is a version of a gene coding for a specific protein. The genotype is the sequence of all alleles of an individual.

Genes, Alleles and Loci on Chromosomes

MIT notes

https://en.wikipedia.org/wiki/Zygosity

  • Heterozygote
  • Monozygote

Mendelian genetics

Mendelian genetics video

Mendel's laws

1. Law of segregation

2. Independent assortment

Heredity video

Punnet square

https://en.wikipedia.org/wiki/Chromosomal_crossover

Population genetics

Genetic engineering

Genotype-phenotype map

guillefix 21st July 2016 at 3:18pm

Map between a coding space (genotype), and another space, called the phenotype. These appear, for instance, in Evolution.

See MMathPhys oral presentation

Genotype–phenotype mapping and the end of the ‘genes as blueprint’ metaphor

Developmental encoding or indirect encoding: you encode the instructions to build the system (by Morphogenesis), instead of the system itself (direct encoding). See Neuroevolution: Direct and Indirect Encoding of Networks. Comparing direct and developmental encoding schemes in artificial evolution

Genotype-phenotype maps - Stadler Ideas extending standard topology to explore the spaces defined by GPMs

Evolving scalable and modular adaptive networks with Developmental Symbolic Encoding Ideas of evolvable GPMs, evolving evolvability, etc.

Effects

Bias in GP maps

Simplicity bias

Related concepts

Basin of attraction

Geography

guillefix 28th June 2016 at 4:09pm

Geological period

guillefix 8th July 2016 at 3:22am

A geological time span corresponding to tens to ~one hundred million years. See the timeline of the History of Earth

Geological time spans

guillefix 8th July 2016 at 3:22am
  • Eon. half a billion years or more
  • Era. several hundred million years
  • Period. tens to ~one hundred million years
  • Epoch. tens of millions of years
  • Age. millions of years
  • Chron. Smaller than an age (not common).

https://www.wikiwand.com/en/Period_(geology)

Geometry

guillefix 1st June 2016 at 7:09pm

Things often have a shape

What is space? Well, it can be Euclidean, but it may also be non-Euclidean, and have curvature!

Trigonometry

https://en.wikipedia.org/wiki/Geometry

New Horizons in Geometry (Dolciani Mathematical Expositions) 1st Edition

See part of the book here: http://www.mamikon.com/VisualCalc.pdf

Geophysical fluid dynamics

guillefix 10th July 2016 at 4:21am

Geophysics

guillefix 7th May 2016 at 6:19pm

Georges Méliès

guillefix 25th June 2016 at 4:12am

Georges Méliès. An important figure in the History of cinema

Ghost in the shell

guillefix 4th February 2016 at 9:47pm

Glucose

guillefix 8th July 2016 at 6:06pm

A 6-carbon sugar; its cyclic form is a six-membered ring (five carbons and one oxygen).

GPU computing

guillefix 3rd April 2016 at 2:21pm

Grammar-based compression

guillefix 29th June 2016 at 7:44pm

Approximation algorithms for grammar-based data compression

smallest grammar problem: find the smallest context-free grammar that generates exactly one given string.

Grammar-based code

Grammatical codings

Granular material

guillefix 28th May 2016 at 3:00am

Graph

guillefix 13th July 2016 at 9:28pm

See Graph theory

A graph $g$ consists of a set of vertices $V$, and a set of edges $E \subseteq V \times V$.

Graph automorphism

guillefix 14th July 2016 at 5:32pm

See Graph theory

A (graph) automorphism is an isomorphism from a graph to itself, i.e., where G=GG=G'.

Automorphisms capture the notion of symmetry for a graph, because imposing the edge-preserving condition above is the same as imposing the following: if we move vertices in a geometrical representation of a graph from their positions to the positions previously occupied by other nodes, while carrying their connections with them (because if a connection exists between $i$ and $j$, it must exist between $f(i)$ and $f(j)$, so that it is a homomorphism, i.e. a structure-preserving map), then the new connections will be the same as those of the original graph. (The homomorphism property implies the new connections are a subset of the original ones; for an isomorphism, the inverse map must also be a homomorphism, so a connection $(k,l)$ must correspond to a connection $(f^{-1}(k), f^{-1}(l))$, making them also a superset, and hence the two sets of edges are equal.)

->Another way of looking at a graph automorphism is as a permutation $\lambda$ of the node labels $V$, such that a pair of vertices $(i,j)$ is connected if and only if $(\lambda(i), \lambda(j))$ is connected.

->Yet another way of looking at graph automorphisms is, I think, as symmetries of the Adjacency matrix. Any permutation of the node labels that leaves the adjacency matrix unchanged is a graph automorphism.

The set of all automorphisms of an object forms a group, called the automorphism group. Intuitively, the size of the automorphism group $A(g)$ provides a direct measure of the abundance of symmetries in a graph or network. Every graph has a trivial symmetry (the identity) that maps each vertex to itself.
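The adjacency-matrix view above can be checked directly (a minimal sketch, assuming vertices labelled $0,\dots,n-1$; the example graph is a path on three vertices):

```python
import numpy as np

# A permutation p of vertex labels is a graph automorphism iff it
# leaves the adjacency matrix unchanged: A[p[i], p[j]] == A[i, j].
def is_automorphism(A, p):
    n = len(p)
    return all(A[p[i], p[j]] == A[i, j] for i in range(n) for j in range(n))

# Path graph 0-1-2: reversing the path (swap endpoints 0 and 2) is a
# symmetry, but swapping an endpoint with the centre vertex is not.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
assert is_automorphism(A, [2, 1, 0])       # reversal: an automorphism
assert not is_automorphism(A, [1, 0, 2])   # swap 0 and 1: breaks edges
```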

Graph dynamical system

guillefix 8th July 2016 at 5:43pm

https://www.wikiwand.com/en/Graph_dynamical_system

A particular kind is a Boolean network, if the state of each node is binary.

See also Sequential dynamical system

Graph isomorphism

guillefix 13th July 2016 at 9:38pm

See Graph theory

A (graph) isomorphism is a mapping $f$ between the vertices of two graphs $G=(V_G, E_G)$ and $G'=(V_{G'}, E_{G'})$ (i.e. $i \mapsto f(i)$ with $i \in V_G$ and $f(i) \in V_{G'}$) such that the edge $(i,j)$ is contained in the set of edges of $G$ if and only if the edge $(f(i), f(j))$ is contained in the set of edges of $G'$. Two graphs are isomorphic if there exists an isomorphism between them. They are then also called "topologically equivalent".

Graph laplacian

guillefix 30th January 2016 at 4:11pm

We can describe diffusion of a quantity $\Psi_i$ associated with node $i$ in a network with adjacency matrix $A$, with the equation:

Ψit=jAijC(ΨjΨi)=Cj(Aijδijkj)Ψj)\frac{\partial \Psi_i}{\partial t}=\sum_j A_{ij}C(\Psi_j-\Psi_i)=C\sum_j (A_{ij}-\delta_{ij}k_j)\Psi_j)

where CC is the diffusion constant. In vector form:

Ψt=C(AD)ΨCLΨ\frac{\partial \mathbf{\Psi}}{\partial t}=C(\mathbf{A}-\mathbf{D})\mathbf{\Psi} \equiv -C\mathbf{L}\mathbf{\Psi}

where D=diag(k1,...,kn)\mathbf{D}=diag(k_1,...,k_n) is the diagonal matrix of degrees, and LL is the (combinatorial) graph laplacian, which is then:

Lij={kiif i=j,1if ij and there is an edge (i,j),0otherwise L_{ij}= \begin{cases} k_i &\text{if } i=j,\\ -1 &\text{if }i\neq j \text{ and there is an edge } (i,j),\\ 0 &\text{otherwise} \end{cases}

We can solve this diffusion equation by writing any initial condition as a linear combination of eigenvectors of $L$; the coefficients then decay exponentially, with rates given by $C$ times the corresponding eigenvalues of the matrix.

The graph laplacian can be related to the edge incidence matrix, B\mathbf{B}. This is defined by first labelling the ends of each edge as 11 and 22. Then:

Bij={+1if end 1 of edge i is attached to vertex j,1if end 2 of edge i is attached to vertex j,0otherwise B_{ij}= \begin{cases} +1 &\text{if end 1 of edge i is attached to vertex j,}\\ -1 &\text{if end 2 of edge i is attached to vertex j,}\\ 0 &\text{otherwise} \end{cases}

Then, L=BTB\mathbf{L}=\mathbf{B}^T\mathbf{B}, from which one can show that the eigenvalues of L\mathbf{L} are not only real (as it is symmetric), but also non-negative. This is an important physical property of the Laplacian, because it means the solutions of the diffusion equation only includes non-diverging solutions, which makes sense since diffusion is constructed to conserve the quantity Ψi\Psi_i.

In particular the vector $\mathbf{1}=(1,1,1,...)$ always has eigenvalue $0$ (this implies $\mathbf{L}$ is singular). It can be shown that, more generally, the number of eigenvectors with $0$ eigenvalue is always equal to the number of connected components in the network. Thus the second eigenvalue of the Laplacian (when arranged in ascending order) is non-zero if and only if the network is connected. This eigenvalue is called the algebraic connectivity or spectral gap, and is useful in a technique known as spectral partitioning.
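These properties can be verified on a small example (a sketch; the graph — a triangle with a pendant vertex — and the edge-end labelling are arbitrary choices):

```python
import numpy as np

# Combinatorial Laplacian L = D - A for a triangle {0,1,2} plus a
# pendant vertex 3, checked against L = B^T B and its spectrum.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

# Edge incidence matrix: one row per edge, +1 at "end 1", -1 at "end 2"
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
B = np.zeros((len(edges), 4))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = 1, -1

assert np.array_equal(L, B.T @ B)       # L = B^T B
evals = np.linalg.eigvalsh(L)
assert np.all(evals > -1e-10)           # non-negative spectrum
assert np.allclose(L @ np.ones(4), 0)   # all-ones vector has eigenvalue 0
```

Since this graph is connected, exactly one eigenvalue is zero and the second-smallest (the spectral gap) is strictly positive.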

Graph theory

guillefix 13th July 2016 at 9:38pm

Graphical model

guillefix 4th July 2016 at 6:44pm

Graphics and visualization web libraries

guillefix 20th July 2016 at 1:42pm

Gravitation

guillefix 2nd June 2016 at 1:30am

Greatest lower bound

guillefix 14th July 2016 at 1:30am

Natural extension of the meet of two elements to an arbitrary Set of elements of a poset

Interpreting the Partial ordering as "less than or equal", it can be understood as the greatest point that is less than or equal to all the points in the set.

green.png

Group (algebraic structure)

guillefix 28th June 2016 at 4:41pm

See Group theory for the mathematical study of groups.

Group theory

guillefix 28th May 2016 at 11:09pm

Group-like algebraic structures

guillefix 28th June 2016 at 4:44pm

Groups of vertices (Network theory)

guillefix 11th February 2016 at 12:38am

See Measures and metrics for networks

Many networks naturally divide into groups. These are substructures that are prominent for some reason. Simple examples are:

  • clique: a maximal subset of the vertices in an undirected network such that every member of the set is connected by an edge to every other.
  • Generalizing the above, a k-plex of size nn is a maximal subset of nn vertices within a network such that each vertex is connected to at least nkn-k of the others. We could define this using fractions of others as well.
  • A k-core is a maximal (i.e. it is not a subset of a k-core) subset of vertices such that each is connected to at least kk others in the subset. A way to find them is to successively remove vertices with degree less than kk.
  • k-clique: a maximal subset of vertices such that each is no more than a distance $k$ away from any of the others via the edges of the network. See also k-club and k-clan
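The k-core peeling procedure mentioned above (successively removing vertices of degree less than $k$) can be sketched as follows (the example graph, a triangle with a pendant vertex, is illustrative):

```python
# k-core by peeling: repeatedly delete vertices of degree < k until
# none remain; what survives is the k-core (possibly empty).
def k_core(adj, k):
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    while True:
        low = [v for v, ns in adj.items() if len(ns) < k]
        if not low:
            return set(adj)
        for v in low:
            for u in adj[v]:
                if u in adj:
                    adj[u].discard(v)   # remove v from its neighbours
            del adj[v]

# A triangle {0,1,2} with a pendant vertex 3: the 2-core is the triangle.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert k_core(adj, 2) == {0, 1, 2}
assert k_core(adj, 3) == set()
```

Note the maximality is automatic: peeling removes exactly the vertices that cannot belong to any k-core.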

Many other definitions related to the idea of "groups"

Generalization of components: a k-component is a maximal subset of nodes such that each is reachable from each of the others by at least $k$ vertex-independent paths. Equivalently, no vertices in this set can be disconnected by removing fewer than $k$ vertices (see cut sets). A variant can be defined using edge-independent paths.

Hacking

guillefix 29th April 2016 at 4:02pm

Hardware for deep learning

guillefix 9th July 2016 at 4:21am

Hausdorff space

guillefix 14th July 2016 at 3:32am

A Topological space is Hausdorff, if for any pair of points x,yXx, y \in X there exists open sets O1O_1 and O2O_2 such that xO1x \in O_1, yO2y \in O_2 and O1O2=O_1 \cap O_2 = \emptyset.

Health

guillefix 17th May 2016 at 1:13am

Heap (data structure)

guillefix 30th June 2016 at 1:29am

Hebbian theory

guillefix 24th June 2016 at 1:40am

"Cells that fire together, wire together." However, this summary should not be taken literally. Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can only occur if cell A fires just before, not at the same time as, cell B.

https://en.wikipedia.org/wiki/Hebbian_theory in Neuroscience

Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows:

Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

A fires just before, not at the same time as, cell B. This important aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.[3] The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells, and provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.
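A minimal numerical sketch of Hebb's rule (assuming a linear rate model with update $\Delta w = \eta\, x\, y$; all parameters and the toy setup are illustrative): a synapse whose input is correlated with the postsynaptic activity strengthens, while an uncorrelated one barely changes.

```python
import numpy as np

# Minimal Hebbian update, dw = eta * pre * post: weights grow for input
# components that are correlated with the postsynaptic activity.
rng = np.random.default_rng(0)
eta = 0.01
w = np.zeros(2)
for _ in range(5000):
    x = rng.normal(size=2)   # x[0] drives the cell, x[1] is uncorrelated noise
    y = x[0]                 # postsynaptic activity set by input 0
    w += eta * x * y         # Hebb's rule
# The correlated synapse strengthens far more than the uncorrelated one
assert w[0] > 10
assert w[0] > 10 * abs(w[1])
```

Note that plain Hebbian growth is unbounded; real models add normalization or decay (e.g. Oja's rule), which this sketch omits.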

Cell Assembly Signatures Defined by Short-Term Synaptic Plasticity in Cortical Networks

The cell assembly (CA) hypothesis has been used as a conceptual framework to explain how groups of neurons form memories. CAs are defined as neuronal pools with synchronous, recurrent and sequential activity patterns

hello world

guillefix 17th January 2016 at 4:27pm

Hello World

Hidden Markov model

guillefix 4th July 2016 at 6:44pm

A Markov process, often a Markov chain, that, through a mapping, produces an output that models some Stochastic process.

A Hidden Markov Model (HMM) is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel.

This is used, for instance, in Machine learning
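As a concrete illustration of this definition, a minimal sketch of sampling from a two-state HMM; the states and the transition/emission probabilities below are invented for illustration, not taken from any source:

```python
import random

# Hypothetical two-state HMM: a hidden Markov chain over "Rainy"/"Sunny",
# observed through a memoryless channel (the emission distribution).
transition = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
              "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emission = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
            "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def sample(dist, rng):
    """Draw a key from a {key: probability} dict."""
    r, acc = rng.random(), 0.0
    for key, p in dist.items():
        acc += p
        if r < acc:
            return key
    return key  # guard against floating-point rounding

def generate(n, start="Sunny", seed=0):
    """Generate n observations; the underlying state sequence stays hidden."""
    rng = random.Random(seed)
    state, observations = start, []
    for _ in range(n):
        observations.append(sample(emission[state], rng))
        state = sample(transition[state], rng)
    return observations
```

Only the observation sequence returned by generate is visible to an observer; inferring the hidden state sequence from it is what algorithms like Viterbi address.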

STATISTICAL ANALYSIS OF HIDDEN MARKOV MODELS

High-energy astrophysics

guillefix 7th May 2016 at 6:16pm

History

guillefix 6th February 2016 at 1:23am

History of art

guillefix 6th February 2016 at 1:25am

History of cinema

guillefix 25th June 2016 at 4:13am

History of film

Le Prince, first

...

Edison, Lumiere

Real film continuity, involving action moving from one sequence into another, is attributed to British film pioneer Robert W. Paul's Come Along, Do!, made in 1898 and one of the first films to feature more than one shot.

In 1900, continuity of action across successive shots was definitively established by George Albert Smith and James Williamson, who also worked in Brighton. In that year Smith made As Seen Through a Telescope, in which the main shot shows a street scene with a young man tying the shoelace and then caressing the foot of his girlfriend, while an old man observes this through a telescope. There is then a cut to a close shot of the hands on the girl's foot shown inside a black circular mask, and then a cut back to the continuation of the original scene. Even more remarkable is James Williamson's Attack on a China Mission Station (1900). The first shot shows Chinese Boxer rebels at the gate; it then cuts to the missionary family in the garden, where a fight ensues. The wife signals to British sailors from the balcony, who come and rescue them. The film also used the first "reverse angle" cut in film history.

George Albert Smith (film pioneer)

Science fiction and special effects. Georges Méliès

Sergei Eisenstein

History of deep learning

guillefix 25th May 2016 at 2:15am

History of Earth

guillefix 8th July 2016 at 3:17am

Geologic time scale

Divided in Geological periods

Precambrian

Hadean

Archean

Proterozoic

Phanerozoic

Phaneros -> visible/evident (the root of "phenomenon"); zoic -> animals. "Visible animal life"

Paleozoic

Old animals

Cambrian

Ordovician

Silurian

Devonian

Carboniferous

Permian

Mesozoic

Middle animals

Triassic

Jurassic

Cretaceous

Cenozoic

kainos -> new (from Greek). New animals

Paleogene

Neogene

Quaternary


http://www.ucmp.berkeley.edu/help/timeform.php

History of evolutionary thought

guillefix 25th April 2016 at 10:02pm

History of Humankind

guillefix 26th March 2016 at 3:56am

A.k.a. "Universal history".

history of Japan

History of life

guillefix 21st May 2016 at 9:21pm

History of mathematics

guillefix 27th June 2016 at 10:36pm

History of science

guillefix 8th April 2016 at 5:17pm

History of the videocamera

guillefix 25th June 2016 at 4:14am

See Videocamera, History of cinema

Muybridge

Le Prince, first

...others

Edison, Lumiere

History, Cosmography & Cosmology

guillefix 6th February 2016 at 1:32am

A description of the Cosmos, from the physical, cosmic, non-anthropocentric, perspective.

Its State, the Information it holds, i.e., what is actually found and observed in it, both in the vastness of space and the immensity of time.

Homoplasy

guillefix 19th April 2016 at 9:42pm

See Evolution

Homoplasy is the appearance of similar traits in organisms whose most recent common ancestor didn't have them.

See Wiki article

The causes of homoplasy are sometimes elaborated in the context of the difference between:

  • parallel evolution, where homoplasy is thought to occur because the two organisms share a common genetic heritage, and
  • (proper) convergent evolution, where the same solution is found by different genetic means, and where the primary causal force is usually attributed to selection

See Convergence, adaptation, and constraint. This binary distinction may be too simplistic (see [36–39] for some recent discussion). For the GP map bias in the Arrival of the frequent, the reason for this repetition is not a contingent common genetic history, nor the Allmacht (German for omnipotence) of selection [40], but rather a different kind of ‘deep structure in biology’ [41].

Hubs and authorities (Network theory)

guillefix 10th February 2016 at 11:51pm

See Measures and metrics for networks

One can distinguish two types of important nodes in directed networks. We describe them first for the case of an information network, like the WWW:

  • authorities are nodes that contain useful information on a topic of interest
  • hubs are nodes that point us to the best authorities

This idea was implemented by Kleinberg in the hyperlink-induced topic search or HITS algorithm. The mathematical definitions that try to capture the above intuition are:

  • authority centrality: a vertex pointed to by many hubs (i.e. by many nodes with high hub centrality)
  • hub centrality: a vertex that points to many authorities (i.e. vertices with high authority centrality).

Mathematically,

\mathbf{x}=\alpha\mathbf{A}\mathbf{y}

\mathbf{y}=\beta\mathbf{A}^T\mathbf{x}

where \mathbf{x} and \mathbf{y} are the authority and hub centralities, respectively. These equations combine to show that these centralities are in fact eigenvectors of \mathbf{A}\mathbf{A}^T and \mathbf{A}^T\mathbf{A}, respectively, with the same eigenvalue (which must be the leading one, using arguments similar to the cases above, and which is equal to (\alpha \beta)^{-1}). \beta (or \alpha, but not both) is a free parameter that can be chosen to be 1, since only relative centralities matter.

This connection means that these centralities are similar to the eigenvector centralities for the cocitation and bibliographic coupling network, respectively (see Mathematics of networks).
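The two coupled equations can be solved by power iteration; a minimal sketch (illustrative: the node names, graph representation, and normalisation choice are mine, not from any particular source):

```python
# Minimal power-iteration sketch of Kleinberg's HITS scores.
# adj is a dict of out-neighbour lists; x = authority, y = hub scores.
def hits(adj, nodes, iters=100):
    x = {v: 1.0 for v in nodes}  # authority
    y = {v: 1.0 for v in nodes}  # hub
    for _ in range(iters):
        # x = A y: a node's authority sums the hub scores pointing at it
        x_new = {v: 0.0 for v in nodes}
        for u in nodes:
            for v in adj.get(u, []):
                x_new[v] += y[u]
        # y = A^T x: a node's hub score sums the authorities it points to
        y_new = {u: sum(x_new[v] for v in adj.get(u, [])) for u in nodes}
        # normalise (this fixes the free constants alpha and beta)
        nx = sum(val * val for val in x_new.values()) ** 0.5 or 1.0
        ny = sum(val * val for val in y_new.values()) ** 0.5 or 1.0
        x = {v: val / nx for v, val in x_new.items()}
        y = {u: val / ny for u, val in y_new.items()}
    return x, y
```

For example, in the graph {"a": ["c"], "b": ["c"]} node c ends up with the highest authority score and a, b with the highest hub scores.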

Huffman coding

guillefix 28th June 2016 at 1:21am

A code used in Data compression that is optimal, in the sense that it achieves the entropy limit (within less than one bit).

https://en.wikipedia.org/wiki/Huffman_coding

https://www.cs.cf.ac.uk/Dave/Multimedia/node210.html

Algorithm

1. Initialization: Put all nodes in an OPEN list, keep it sorted at all times (e.g., ABCDE).
2. Repeat until the OPEN list has only one node left:
(a) From OPEN pick two nodes having the lowest frequencies/probabilities, create a parent node of them.
(b) Assign the sum of the children's frequencies/probabilities to the parent node and insert it into OPEN.
(c) Assign code 0, 1 to the two branches of the tree, and delete the children from OPEN.

In the animation below, the blue nodes are those in the OPEN list; at every iteration we choose the two nodes with the lowest frequencies among the blue nodes (with preference given to those not yet in the tree, if frequencies are equal).
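The steps above can be sketched as follows (a minimal implementation, with the sorted OPEN list kept as a min-heap; the example frequencies in the note below are illustrative):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """freqs: {symbol: frequency} -> {symbol: bitstring}."""
    tiebreak = count()  # unique tag so the heap never compares the dicts
    # 1. Initialization: put all (leaf) nodes in the OPEN heap.
    open_list = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(open_list)
    # 2. Repeat until the OPEN list has only one node left:
    while len(open_list) > 1:
        # (a) pick the two nodes with the lowest frequencies
        f0, _, left = heapq.heappop(open_list)
        f1, _, right = heapq.heappop(open_list)
        # (c) assign 0/1 to the two branches...
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        # (b) ...and insert the parent, with the summed frequency, into OPEN
        heapq.heappush(open_list, (f0 + f1, next(tiebreak), merged))
    return open_list[0][2]
```

For frequencies {"A": 5, "B": 2, "C": 1, "D": 1}, the most frequent symbol A gets the shortest (one-bit) codeword.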

Human

guillefix 7th May 2016 at 3:49am

Human anatomy

guillefix 8th April 2016 at 6:03pm

Human behaviour

guillefix 5th July 2016 at 3:57am

Human geography

guillefix 26th March 2016 at 7:27pm

Human hearing

guillefix 8th April 2016 at 6:03pm

http://www.open.edu/openlearn/science-maths-technology/science/biology/hearing/content-section-3.3

http://vaczy.dk/htm/acoustics.htm

http://www.newmusicbox.org/articles/The-Musical-Ear/

Actually, which sounds sound nice together is apparently a far more complex question than rhythms (not an expert here, just curious). The main explanation I can find (given that there are many things yet unknown, such as the roles spatial, temporal, and neural encoding play) is mentioned here: http://www.newmusicbox.org/articles/The-Musical-Ear/ The basilar membrane is known to certainly play a role in pitch perception. Now, most times we hear a frequency, we hear it from some object (like an instrument) that generates harmonics of that frequency (ultimately due to ratios of lengths and linear dispersion relations). Now, harmonic frequencies (with simple ratios, as you say) share a lot of harmonics themselves. These will excite the basilar membrane in the same spots. And as long as the harmonics don't differ by more than about 10 Hz, they will be indistinguishable (as far as the basilar membrane is concerned, due to bandwidth). However, if you make two non-harmonic sounds with two non-commensurate objects, a lot of their harmonics will be very close, within the so-called critical bandwidth, which has been shown to cause dissonant perception. Now, a plausible theory for why even pure sinusoidal waves at simple ratios tend to sound better (though I did the test now: two non-harmonic sine waves don't sound nearly as bad as two non-harmonic piano notes) may be that the brain develops neuronal networks to prefer these sounds.

Your theory of the brain detecting the rhythms is still interesting though, and may be relevant to the "temporal coding" theories that have been proposed, but I have not read much about those..

http://plasticity.szynalski.com/tone-generator.htm

The Neural Code of Pitch and Harmony

https://en.wikipedia.org/wiki/Basilar_membrane

https://en.wikipedia.org/wiki/Pitch_%28music%29#Theories_of_pitch_perception

https://en.wikipedia.org/wiki/Consonance_and_dissonance#Physiological_basis_of_dissonance

https://en.wikipedia.org/wiki/Music_psychology#Neural_correlates_of_musical_training

https://en.wikipedia.org/wiki/Psychoacoustics#Music

Music and measure theory: The reason it works so well to have twelve notes in the chromatic scale is that powers of the twelfth root of two tend to be within a 1% margin of error of simple rational numbers. And it's good to have powers of the same factor for the notes, because the brain perceives separation between frequencies logarithmically, not linearly.
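The 1% claim can be checked directly (this check is mine, not from the linked video; the just-intonation ratios below are the standard ones):

```python
# Powers of 2**(1/12) vs simple rational intervals (just intonation):
# fifth 3/2, fourth 4/3, major third 5/4, minor third 6/5, octave 2/1.
just = {7: 3 / 2, 5: 4 / 3, 4: 5 / 4, 3: 6 / 5, 12: 2 / 1}

for semitones, ratio in just.items():
    equal = 2 ** (semitones / 12)          # equal-tempered frequency ratio
    error = abs(equal - ratio) / ratio     # relative error vs the just ratio
    print(f"{semitones:2d} semitones: {equal:.4f} vs {ratio:.4f} "
          f"({100 * error:.2f}% off)")
```

Every interval in the table indeed lands within 1% of its simple ratio; the worst case is the minor third at about 0.9%.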

Human physiology

guillefix 1st July 2016 at 7:53pm

Human positions

guillefix 5th July 2016 at 3:57am

Human positions refer to the different physical configurations that the human body can take.

https://en.wikipedia.org/wiki/List_of_human_positions

Human vision

guillefix 14th July 2016 at 6:06pm

Human voice

guillefix 12th June 2016 at 4:35am

Human-computer interaction

guillefix 30th June 2016 at 1:38am

Humanities

guillefix 7th May 2016 at 4:36am

The knowledge regarding all the natural aspects and artificial constructs related to Humanity, the collective of the Human species, evolved on Planet Earth.

We include here what is normally known as humanities, but also Social sciences that treat aspects of Humanity (so, for example, scientific studies of animal societies are not part of the humanities, although they are part of social sciences)

Although humans are the origin of our currently known complex social systems, transhuman advancements (like the development of AI, Mind uploading, or Genetic engineering), or the discovery of Extraterrestrial life, may make future non-human agents as important as, or even more important than, humans in society. Cosmos will then need to be upgraded with a new term more encompassing than "Humanities". Society is one candidate for such a general term, and indeed Social sciences have gone beyond the standard humanities in studying social aspects of non-humans. In any case, our current social systems are still mostly human-centered, and the centrality of this tiddler represents that state of affairs.

Note that even as animal right movements are succeeding in giving animals fundamental rights of living and sentient beings (as the right to be protected from suffering), animals will probably still play a secondary role in society, as humans are generally more complex and intelligent in their behaviour.


Humans

The study of Humans per se (whether individually, or collectively in societies) is called Anthropology.

Humans are part of the Tree of life, and their natural aspects are thus studied in Biology, in particular in biological anthropology. On the other hand, the collective of artificial constructs created by Humanity is known as Culture (studied by cultural anthropology).

Human society

Humans organize themselves in societies. The organization often involves systems of Law (~what can* be done), Politics (~what should we do), and Economics (~how do we get what we need).

* "what can be done" here refers, of course, not just to what can be done by physical laws, but to what is permitted by Law, the societal construct that dictates what humans are and are not allowed to do by society

Physical aspects of these societies are mostly studied in Geography, particularly in Human geography.

Human communication is a crucial aspect of the human condition, and of the resulting societies. It is studied in Linguistics (and in particular, for humans, in linguistic anthropology)

Hund's rules

guillefix 22nd June 2016 at 5:18am

https://en.wikipedia.org/wiki/Hund%27s_rules

http://hyperphysics.phy-astr.gsu.edu/hbase/atomic/hund.html

The first two rules are mostly caused by the Coulomb interaction.

The third is caused by spin-orbit coupling.

Hunting

guillefix 27th February 2016 at 2:57pm

Hydrodynamic slip

guillefix 16th June 2016 at 9:38pm

Hydrophobicity

guillefix 14th June 2016 at 7:13pm

Hyperbolic fixed point

guillefix 3rd June 2016 at 7:10pm

A fixed point is called hyperbolic if none of the eigenvalues of the Jacobian evaluated at the point have zero real part.
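For a 2x2 Jacobian this check can be done with the trace/determinant formula for the eigenvalues (a minimal sketch; the example matrices in the note below are illustrative):

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via trace and determinant."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_hyperbolic(a, b, c, d, tol=1e-12):
    """Hyperbolic iff no eigenvalue of the Jacobian has zero real part."""
    return all(abs(lam.real) > tol for lam in eigenvalues_2x2(a, b, c, d))
```

For example, the centre [[0, -1], [1, 0]] (eigenvalues ±i) is not hyperbolic, while the saddle [[1, 0], [0, -1]] is.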

i.i.d.

guillefix 4th July 2016 at 11:07pm

independent and identically distributed

Ideas for understanding the simplicity bias in finite state transducers

guillefix 5th July 2016 at 5:20pm

See Simplicity bias in finite state transducers

On the second question, there is actually a stumbling block due to the random FST ensemble I'm using, which consists only of accessible FSTs (of given size). Accessible means that any state can be reached from the initial state (so that there are no 'useless' states). This is in contrast to random unrestricted FSTs, where each of the K_i n out-stubs are connected to a state, independently and uniformly at random.. Answering probabilistic questions for the latter is much easier than for accessible FSTs (see attached or http://bit.ly/290fHji). I guess we could simulate random unrestricted FSTs, though I think accessible FSTs are a more interesting ensemble, because you fix the actual number of states in the automaton. Anyway, there may still be some things to say here, because in the article I attach he finds a way of relating statistics of automata to those of accessible automata, but only asymptotically, and with inequalities. There may be other approaches with Analytic combinatorics, but they are potentially quite hard.

Regarding the first question, I've been refining my ideas about loops of 'noncoding states' (with output symbols being equal). In particular, looking at the experimental results, I've noticed that bias is associated with 'absorbing regions' that contain at least one non-coding state (approximately absorbing regions also show some bias). An absorbing region is a set of states which you can reach, but which you can't leave. Now, I've found two main factors determining the frequency/neutral-set-size/designability (call this NSS) of an output of an FST that contains this:

  • The structure of the absorbing region
  • The number of steps spent in the absorbing region, call it m.

Now, I've also found that the NSS is multiplied by 2^(a*m), where a depends on the structure of the absorbing region (in an interesting combinatorial way). So the NSS \propto 2^(a*m). The proportionality constant will depend on the particular string, and the number of noncoding states it passes through, outside the absorbing region (this requires more attention).

Now, if the m output bits from the absorbing region are composed of a repeating pattern (often the case, but I can think of exceptions..), the Lempel-Ziv complexity C <= n-m + const., where n is the total number of bits, and const is the number of bits in the repeating pattern.

Under these assumptions, one can see that the frequency of an output obeys P = NSS/2^n <= 2^(-a*C + b), where I lump all the proportionality constants above into b..
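The Lempel-Ziv complexity invoked here can be sketched with one common variant (LZ78-style phrase counting; the C in the argument above may be defined slightly differently):

```python
# Parse the string left to right into the shortest phrases not seen before;
# the complexity is the number of phrases in the parsing.
def lz_complexity(s):
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    # a leftover partial phrase at the end still counts as one phrase
    return len(phrases) + (1 if current else 0)
```

A string dominated by a repeating pattern from an absorbing region parses into far fewer phrases than an irregular one, e.g. lz_complexity("0" * 16) < lz_complexity("0110100110010110").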

The Fibonacci GP map described in the paper on constrained/unconstrained parts, is actually an example of the simplest kind of FST with the properties above. It can be implemented as a 3-state FST, with an absorbing region consisting of a single non-coding state, and no non-coding states anywhere else. Thus the arguments above work very cleanly. Unfortunately, general FSTs can show more complicated things, like:

  • regions, which are not completely absorbing, but still produce bias.
  • Several absorbing regions
  • Non-coding states outside absorbing regions
  • Absorbing regions, which don't produce simple repeating sequences. I think these won't produce as much bias

All these complicate the picture, and should be taken into account more fully to improve the argument above. In any case, it makes sense that the argument above can't be exact (except for simple cases like the Fibonacci GP map), because most FSTs show a complexity/frequency plot which is not perfect, but has some noise.

Hope that wasn't too long. I think also that all this will be easier to understand with pictures...

Immunology

guillefix 11th June 2016 at 2:10pm

The branch of medicine and biology concerned with immunity, that is, the ability of an organism to resist a particular infection or toxin by the action of specific antibodies or sensitized white blood cells.

Immunology in the skin

Immutable type (programming)

guillefix 29th June 2016 at 2:29am

A Data type corresponding to a value that can't be changed.

Immutable types: understand their benefits and use them

Usable for Concurrent computing
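A minimal Python sketch (my own illustration, not from the linked article): tuples, strings, and frozen dataclasses are immutable types, which makes instances safely shareable between threads and usable as dict keys:

```python
from dataclasses import dataclass

# A frozen dataclass is an immutable type: mutation raises at runtime,
# and instances are hashable by value.
@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
try:
    p.x = 3.0  # rejected: the value can't be changed after creation
except Exception as e:
    print(type(e).__name__)  # dataclasses.FrozenInstanceError

locations = {p: "origin-ish"}  # hashable, so usable as a dict key
```
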

Incompressible flow

guillefix 29th January 2016 at 12:57am

Independence of random variables

guillefix 2nd July 2016 at 3:13pm

See here: Chapter 2 Information Measures - Section 2.1 A Independence and Markov Chains

Independence of two random variables

Mutual independence

Pairwise independence

Conditional independence

https://en.wikipedia.org/wiki/Conditional_independence

See here. Note that his definition is the same as in wiki. Just divide by p(y)p(y) to see this. His example at the end is rather illustrative too.
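The "divide by p(y)" remark can be made explicit (assuming p(y \mid z) > 0, and noting that p(x, y \mid z)/p(y \mid z) = p(x \mid y, z)):

```latex
p(x, y \mid z) = p(x \mid z)\, p(y \mid z)
\;\Longleftrightarrow\;
\frac{p(x, y \mid z)}{p(y \mid z)} = p(x \mid z)
\;\Longleftrightarrow\;
p(x \mid y, z) = p(x \mid z)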

Independent paths, connectivity, and cut sets (Graph theory)

guillefix 25th January 2016 at 11:21pm

Number of independent paths between two vertices (the connectivity) gives measure of how strongly connected they are. Paths can be vertex-independent if they share no vertex (other than starting or ending vertices), or edge-independent if they share no edge.

A vertex (edge) cut set is a set of vertices (edges) that, if removed, will disconnect a specified pair of vertices. A minimum cut set is the smallest such set for a given pair of vertices. For weighted networks, a minimum cut set is the such set with the least total weight.

Menger's theorem:

If there is no cut set of size less than nn, then there are at least nn independent paths.

This actually implies that the size of the minimum cut set (C) equals the connectivity of two vertices (I): C>n \Rightarrow I>n. \neg (C>n) \Leftarrow \neg (I>n). C\leq n \Leftarrow I\leq n. In particular, C\leq n \Leftarrow I = n. However, if I = n, we need to cut at least n vertices/edges, so C\geq n. \therefore C=n=I.

Consider the maximum flow between two vertices if the network were made of water pipes, each with the same pipe capacity r (the maximum flow a single pipe can sustain); this maximum flow is the number of edge-independent paths times r. Let I be the size of the minimum edge cut set. Clearly, Ir is a lower bound for this max flow, since each of the I edge-independent paths will independently carry max flow r. Also, if we remove an edge that forms part of a path between them, we decrease the flow by at most r. Thus, if we remove the I edges of the minimum cut set, we decrease the flow by at most Ir, but this must remove all flow. Hence the total capacity is at most Ir, which is then an upper bound. Ir is both an upper and a lower bound, and hence the maximum flow must equal Ir. This is the max-flow/min-cut theorem, for the special case of the same capacity for all pipes.

The max-flow/min-cut theorem can be generalized to weighted networks. This can be shown by transforming the weighted network into a multigraph.

Applications

This result is useful because some computer algorithms (maximum flow algorithms) can compute the maximum flow easily. But, by the result above, they thereby also calculate the minimum cut set size and the connectivity, which can be used to find clusters in networks. This is in fact the current standard numerical method for connectivities and cut sets.

The max-flow/min-cut theorem has been used in a polynomial-time algorithm for finding ground states of the thermal random-field Ising model. See reference [257] in Newman's book.
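One such maximum flow algorithm is Edmonds-Karp (BFS augmenting paths); a minimal sketch, with an illustrative dict-of-dicts graph representation of my own choosing:

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: {u: {v: c}} with s among its keys. Returns the max s-t
    flow, which by max-flow/min-cut equals the minimum cut capacity."""
    # residual capacities, including zero-capacity reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximal
        # find the bottleneck along the path, then push that much flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push
        flow += push
```

On a network with two edge-independent unit-capacity paths from s to t, the returned flow is 2, which by the discussion above is also the size of the minimum edge cut set.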

Industrial engineering

guillefix 7th May 2016 at 3:21am

Industrial engineering is a branch of engineering which deals with the optimization of complex processes or systems. Industrial engineers work to eliminate waste of time, money, materials, man-hours, machine time, energy and other resources that do not generate value. According to the Institute of Industrial and Systems Engineers, they figure out how to do things better, they engineer processes and systems that improve quality and productivity

Industry

guillefix 7th May 2016 at 3:33am

Industry is the production of goods or related services within an economy, by processing raw materials.

This process is more generally called Manufacturing. Industry is, therefore, manufacturing in the context of an economy.

https://en.wikipedia.org/wiki/Industry

Influence maximization in complex networks

guillefix 11th June 2016 at 1:56am

Influence maximization/optimization in complex networks through optimal percolation

Keywords: Social dynamics, Networks, Percolation

Influence maximization in complex networks through optimal percolation

Annotated paper

I think this article will be of interest to people investigating social or other networks over which something is transmitted over the edges (whether these are infections, messages, opinions...). These arise in many problems in science and engineering, especially those involving complex social networks. In these networks one can often assign importance to nodes by seeing how much their removal disrupts the potential spread of the unit being transmitted across the network.

In particular, the optimal influence problem tries to maximize the influence on the network by affecting the least number of nodes. This article presents a novel algorithm that can find very good approximate solutions to this problem, which is generally NP-hard. They do this by first expressing the problem in terms of a percolation process, so that maximum influence corresponds to making the giant connected component disappear with the least number of nodes removed. Although for small networks this can be tackled using methods from statistical mechanics, an adaptive algorithm is more effective for large networks. They demonstrate its effectiveness, as well as its superiority against other heuristic algorithms, in both synthetic and real networks.

Although I think the article does a good job at summarizing the results in the 4 pages of the letter, I think some more explanation of the connection between the optimal influence problem and their mathematical formulation would be useful to aid the reader's understanding (leaving the SI only for non-crucial details). For instance, I think that it should be mentioned that the stability of the G=0 solution, under a locally tree-like assumption, is what determines whether the GCC is present or not, for large networks. Optimal influence chooses the minimum number of nodes that make the G=0 solution stable. Similarly, I think the vector \mathbf{w}_0 is introduced without explaining what it represents (a perturbation to the order parameter vector v_{i\rightarrow j}).

'Smaller is smarter' in superspreading of influence in social network

Containing Epidemic Outbreaks by Message-Passing Techniques

Information

guillefix 13th July 2016 at 8:58pm

Information measures

guillefix 15th July 2016 at 9:38pm
  • Entropy
  • Joint entropy
  • Conditional entropy
  • Mutual information (the difference between the entropy and the conditional entropy, i.e. the decrease in uncertainty about a random variable when you learn about another random variable, i.e. the information you gain about a random variable from another RV). A measure of dependence.
  • Conditional mutual information
  • Relative entropy. Mutual information is a special case. Defines a measure of "distance" between probability distributions. Applications in estimating hypothesis testing errors and in large deviation theory.
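Minimal sketches of two of these measures for finite distributions, represented as dicts of probabilities (the joint distribution in the note below is illustrative):

```python
from math import log2

def entropy(p):
    """H(X) in bits for a finite distribution {outcome: probability}."""
    return -sum(pi * log2(pi) for pi in p.values() if pi > 0)

def mutual_information(joint):
    """joint: {(x, y): p}. Uses I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    px, py = {}, {}
    for (x, y), p in joint.items():  # marginalise the joint distribution
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px) + entropy(py) - entropy(joint)
```

For perfectly correlated fair bits, joint = {(0, 0): 0.5, (1, 1): 0.5}, the mutual information is 1 bit; for independent fair bits it is 0.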

Shannon's Information Measures

explanation

Continuity of Shannon's Information Measures

Some Useful Information Inequalities

Three approaches to the quantitative definition of information

Information science

guillefix 8th July 2016 at 1:34am

Information source

guillefix 4th July 2016 at 11:12pm

See Data transmission.

An information source is often modelled as a discrete-time stochastic process, so it is a sequence of Random variables, taking values in a set called the source alphabet.

A stationary information source is one corresponding to a stationary stochastic process, so that any finite block of random variables and any of its time-shifted versions have exactly the same joint distribution.

An important property of an information source is its Entropy rate

Types of information source

Discrete memoryless source

Markov input process

Information system

guillefix 30th June 2016 at 1:42am

Information technology

guillefix 7th May 2016 at 1:35am

Information theory

guillefix 14th July 2016 at 3:48am

Information theory

Information Theory, Information Theory (CUHK)

Entropy/Information

Coding theory

A code is a representation of information/data.

Coding theory (and/or coding methods) is the study of the properties of codes and their fitness for a specific application. These applications include Data transmission, Data compression, Cryptography, and Network information theory

Data transmission

See Source-channel separation theorem

The main problem of study in data transmission theory is: for a particular Communication channel, find a code such that the data transmission rate is as high as possible, while the receiver receives the information with negligible probability of error.

The limit in data transmission rate turns out to be the Channel capacity, as established by the Channel coding theorem.

Data transmission is part of the broader area of study called Communication theory, which includes consideration of the information source and destination.

Data compression

Study of the theoretical limits and implementation of codes that make the average description length of the value of a random variable as short as possible, whether in a lossless or lossy way.

The limit in the average length of codewords in a lossless code turns out to be the entropy, as established by the Source coding theorem

Limits in lossy codes are established in Rate distortion theory

Cryptography

Network information theory

Algorithmic information theory

Kolmogorov complexity. The shortest program that will produce the desired output on a Turing machine. Occam's razor


More related areas


Shannon - A Mathematical Theory of Communication

General theory of information transfer: Updated

Entropy reduction

Storing and Transmitting Data: Rudolf Ahlswede’s Lectures on Information ...

Information Theory, Combinatorics, and Search Theory

Theory of identification

Theory of ordering (see Entropy reduction)

Search theory

YB videos, MIT videos

Infrastructure

guillefix 1st July 2016 at 11:09pm

Injective function

guillefix 4th July 2016 at 11:30pm

An injective function, also called one-to-one, is a function F(x) such that x \neq y \Rightarrow F(x) \neq F(y).
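For a finite domain the definition can be checked directly (a sketch; the example functions in the note below are illustrative):

```python
def is_injective(f, domain):
    """True iff f maps distinct elements of domain to distinct values."""
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False  # two distinct inputs collided on the same output
        seen[y] = x
    return True
```

For example, is_injective(lambda x: 2 * x + 1, range(10)) holds, while is_injective(lambda x: x * x, range(-3, 4)) fails, since F(-1) = F(1).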

Inorganic chemistry

guillefix 21st January 2016 at 8:55pm

Integrating symbols into deep learning

guillefix 22nd May 2016 at 4:15pm

Deep learning

Integrating Symbols into Deep Learning

Abstract of talk: Computer Science is the symbolic science of programming, incorporating techniques for representing and reasoning about the semantics, correctness and synthesis of computer programs. Recent techniques involving the learning of deep neural networks have challenged the "human programmer" model of Computer Science by showing that bottom-up approaches to program synthesis from sensory data can achieve impressive results ranging from visual scene analysis, expert level play in Atari games and world-class play in complex board games such as Go. Alongside the successes of Deep Learning, increasing concerns are being voiced in the public domain concerning the deployment of fully automated systems with unexpected and undesirable behaviours. In this presentation we will discuss the state-of-the-art and future challenges of Machine Learning technologies which promise the transparency of symbolic Computer Science with the power and reach of sub-symbolic Deep Learning. We will discuss both weak and strong integration models for symbolic and sub-symbolic Machine Learning alongside ongoing work on applications in this area.

Integrating symbols into deep learning talk notes:

  • Motivation
    • Transparency. Easily interpretable..
      • Comprehensibility test. Given a program, ask questions about it.. How well someone answers the questions..
      • Depends on how our own human minds work..
      • Can we even understand some of the problems..
      • Alternative: Machines that teach us how they work would be wonderful, because at the moment we need to make the effort to interpret them.
    • Computer science. Clear semantics, verification, etc.
    • Deep learning. Very different from rest of CS
    • Royal society + others... Public concern
    • Need to integrate CS transparency and power of DL
  • Deduction and programming
    • Curry-Howard correspondence.
    • Proofs as programs!
    • Used in verification and synthesis of programs
  • Machine learning, deduction and programming
    • Logic programming (Kowalski 1975). Program <> set of clauses in logic...
    • Inductive logic programming (Shapiro 1982, Muggleton 1991). Start with prior knowledge, hypothesize from data addition to knowledge, if hypothesis is verified as sufficiently valid, you add to your knowledge.
    • Inverse resolution (Muggleton and Buntine 1988). Resolution?.
    • Inverse entailment (more efficient). (muggleton 1995)
    • Problems with recursion and predicate invention (muggleton et al 2011)
    • Meta-interpretative learning (muggleton et al 2015). Make it into higher order logic framework.
    • target theory....
    • Hmm, gotta learn more about logic
  • Symbolic and non-symbolic machine learning.
    • Neural Turing machines! (Graves et al 2014). NIPS 2015 workshop. Still doesn't make things necessarily transparent..
    • Bayesian-neural integration. Sum-Product Markov networks (Domingos 2015)
    • ILP-neural integration. Bottom-clause Neural Nets (Garcez 2014)
  • Applications
    • Sensory
      • Staircase, Euclid project, microbe movies now
    • Motor
      • IJCAI 2013. Build stable wall
      • IJCAI 2015. Learning efficient strategies.
    • Language applications.
      • Learning formal grammars (MLJ 2014).
      • Dependent string transformations (ECAI 2014). Transparent?..
    • What next for meta-interpretive learning
      • Neuro-logical Turing machines
      • Problem decomposition. One of the central issues in programming. Predicate invention is part of this
      • Object invention. Intrinsic to learning and perception. Introducing new entities into language. Hard problem to make them meaningful
      • Large-scale background knowledge: How can learners scope relevance of background concepts?
      • Probabilistic reasoning. Bayesian... Single examples?

Intelligence augmentation

guillefix 19th July 2016 at 4:58pm

Interdisciplinary

guillefix 5th April 2016 at 3:23pm

http://michaelnielsen.org/

Facebook, twitter news feed...

Interesting papers on statistical physics and complex systems

guillefix 11th May 2016 at 2:36am

Interfacial forces

guillefix 2nd July 2016 at 5:27pm

Intermolecular forces

guillefix 28th April 2016 at 11:16pm

Intermolecular forces are usually composed of a repulsive and an attractive part:

  • The repulsive part is essentially quantum mechanical (in particular, the Pauli exclusion principle plays a key role). However, often we approximate it by a hard-sphere infinitely steep potential, because the real potential is much steeper than other potentials in most problems.
  • The attractive part shows more variability, but it almost invariably is of electrostatic origin.

If the potential indeed has an attractive component (the repulsive one always exists), then the potential will present a minimum, corresponding to an equilibrium state, known as a bond. The depth of the potential minimum (relative to the thermal energy, kBTk_B T) determines the strength, or stiffness of the bond. One often makes a distinction between:

  • Chemical bonds or permanent bonds (see here) are those whose bond energy is much larger than the thermal energy.
  • Physical bonds or temporary bonds are those with bond energy comparable to, or just a bit bigger than the thermal energy.

Common intermolecular forces

  • Van der Waals forces. Due to fluctuations of electric dipole moment of electron cloud inducing correlated fluctuations on nearby molecules. Goes like 1/r61/r^6, and it is kBT\sim k_BT at room temperature. Approximately isotropic
  • Hydrogen bonds. Hydrogen when covalently bonded to a more electronegative atom (like Oxygen) will develop a positive charge, while the electronegative atom will develop a negative charge. The hydrogen atom in some molecule can then attract an electronegative atom in another molecule. 1 to 5kBT\sim 1\text{ to }5 k_B T.
  • Hydrophobic interaction. Water in its liquid state forms a 3-dimensional network of hydrogen-bonded molecules that leaves more space for each to fluctuate in position, thus reducing its entropy (similar to how a colloid suspension will "crystallize" above a critical concentration for the same entropic reasons). A large molecule will disrupt this configuration, and will effectively reduce the entropy of the system. Therefore, there will be an effective force (entropic force) pushing large molecules together so that this disruption is the least, and so entropy is maximized, as it will be in equilibrium. This is the hydrophobic interaction. Question: doesn't the fact that large non-polar molecules attract only via van der Waals, while water (polar) attracts via hydrogen bonding but barely via van der Waals, so that these two different kinds of molecule barely attract each other, also play a role in their tendency to clump together with like kinds and separate from unlike kinds?

Internet

guillefix 9th April 2016 at 6:09pm

Internet of Things

guillefix 20th May 2016 at 2:55am

interstellar_wallpaper

guillefix 17th January 2016 at 4:08pm

Invariant manifolds in dynamical systems

guillefix 11th June 2016 at 10:37pm

Dynamical systems

A fixed point (and other structures too, if suitably generalized) of a dynamical system has three kinds of invariant manifolds:

  • Unstable manifold
  • Stable manifold
  • Center manifold

See Chapter 3 in Wiggins book.

ChaosBook.org chapter Stretch, fold, prune - Stable, unstable manifolds

ChaosBook.org chapter Stretch, fold, prune- Plotting an unstable manifold

Invariant measure

guillefix 7th July 2016 at 7:01pm

a Measure that is preserved by some Function

We say that the function f: X \rightarrow X preserves a measure \mu on (X, \mathcal{B}) if \mu(f^{-1} B) = \mu(B), for all B \in \mathcal{B}. The function is then said to be \mu-preserving, or \mu is said to be f-invariant.

https://en.wikipedia.org/wiki/Invariant_measure
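A quick numerical illustration (my own sketch, not from the source): the logistic map x → 4x(1−x) is the standard textbook example, whose invariant measure has the known density ρ(x) = 1/(π√(x(1−x))). The time average along one orbit should reproduce the measure of any interval:

```python
import numpy as np

def orbit(f, x0, n, burn=100):
    """Iterate f from x0, discard a transient, return n orbit points."""
    x = x0
    for _ in range(burn):
        x = f(x)
    out = np.empty(n)
    for i in range(n):
        out[i] = x
        x = f(x)
    return out

logistic = lambda x: 4.0 * x * (1.0 - x)
xs = orbit(logistic, 0.1234, 200_000)

# Invariant density rho(x) = 1/(pi sqrt(x(1-x))), so
# mu([a,b]) = (2/pi)(arcsin(sqrt(b)) - arcsin(sqrt(a)))
a, b = 0.4, 0.6
empirical = float(np.mean((xs >= a) & (xs <= b)))
predicted = (2 / np.pi) * (np.arcsin(np.sqrt(b)) - np.arcsin(np.sqrt(a)))
print(empirical, predicted)  # both ≈ 0.128
```

(The doubling map, a tempting simpler example, doesn't work in floating point: binary shifts collapse every orbit to 0 after ~53 iterations.)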

Ion channel

guillefix 2nd July 2016 at 2:18pm

Ising model

guillefix 16th June 2016 at 8:36pm

Lattice with spins interacting with nearest neighbours to favour either alignment or anti-alignment, as a minimal model of a ferromagnet. It has many connections with other systems in Statistical physics, and Complex systems, due to the abstract nature of the model.

1D Ising model was solved by Ising and others.

A major breakthrough in statistical physics was the exact solution of the Ising model in two dimensions [107]. Onsager gave in 1944 a complete solution of the problem in zero external magnetic field.

But in three dimensions, Istrail has shown [108] that essentially all versions of the Ising model are computationally intractable across lattices, and thus the 3D Ising model, in its full generality, is NP-complete.
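A minimal simulation sketch (my own, not from the cited papers): single-spin-flip Metropolis Monte Carlo for the 2D model with J = 1 and zero field; lattice size, temperatures, and sweep counts are arbitrary illustrative choices. Below the Onsager critical temperature T_c ≈ 2.269 the ordered state survives; well above it, it melts:

```python
import numpy as np

def metropolis_sweeps(spins, T, sweeps, rng):
    """Single-spin-flip Metropolis dynamics on a periodic 2D lattice (J=1, B=0)."""
    L = spins.shape[0]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nn  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L = 16
cold = metropolis_sweeps(np.ones((L, L)), T=1.5, sweeps=200, rng=rng)
hot = metropolis_sweeps(np.ones((L, L)), T=5.0, sweeps=200, rng=rng)
print(abs(cold.mean()), abs(hot.mean()))  # magnetization: near 1 cold, near 0 hot
```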

For another model with many interesting connections, see Spin glass

IT innovation

guillefix 7th May 2016 at 1:31am

Janus swimmer

guillefix 17th June 2016 at 1:11am

A particular kind of Self-propelled particle with a kind of asymmetry corresponding to half of the particle having one property, and the other a different one. Most often a Janus swimmer refers to spherical colloids, where one hemisphere is coated with some material, and the other with a different one (or just exposing the material of the colloid itself).

A particular kind is the Catalytic conductor-insulator Janus swimmer

JavaScript

guillefix 15th July 2016 at 9:37pm

The language of the web

A Programming language, often used for Frontend web development.

Redux is a nice functional programming-like framework for React. Learn redux.

Javascript in one pic

JS libraries

https://github.com/dominictarr/hyperscript

Functional programming on JavaScript

JS animation libraries

Meteor (JS)

Math JS libraries

Graphics and visualization web libraries

5 JAVASCRIPT LIBRARIES FOR JULY 2016

Other

https://jsx.github.io/

Reactive programming: http://reactivex.io/

OOP on JS: https://github.com/jneen/pjs

http://requirejs.org/docs/commonjs.html

http://requirejs.org/

http://www.typescriptlang.org/Tutorial

http://brackets.io/

https://react.rocks/

https://material.angularjs.org/latest/

https://getmdl.io/

https://www.polymer-project.org/1.0/

Animation and graphics: https://www.khanacademy.org/computing/computer-programming https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Getting_started_with_WebGL

Data structures: http://jnuno.com/tree-model-js/

Functional programming: Functional programming in Javascript. Functional programming JS libraries: http://ramdajs.com/0.19.1/index.html, https://lodash.com/. This looks awesome: http://elm-lang.org/

Other tools: https://babeljs.io/docs/setup/ to compile ECMAScript 2015 to normal compatible JS!! Meteor has an ES6 package already.

Testing: https://github.com/sindresorhus/ava for concurrent testing


Other JS-related languages

TypeScript

http://stackoverflow.com/questions/14412164/is-there-a-tool-to-convert-javascript-files-to-typescript

http://www.typescriptlang.org/

Coffeescript

JavaScript mathematics packages

guillefix 20th June 2016 at 5:40pm

Jekyll

guillefix 27th June 2016 at 10:49pm

To deploy: bundle exec jekyll serve --watch

Join operation

guillefix 14th July 2016 at 1:26am

A join, \vee, is an operation defined on elements of a poset P (not necessarily all of them) as follows:

The join (or Least upper bound) of a, b \in P is an element a \vee b \in P such that:

(a) a \vee b is an upper bound of a and b: thus a \preceq a \vee b and b \preceq a \vee b;
(b) a \vee b is the least such upper bound: i.e., if there exists c \in P such that a \preceq c and b \preceq c, then a \vee b \preceq c.

Note that, if it exists, a join is necessarily unique.
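A concrete sketch (my own, not from the source): in the divisibility poset on the positive integers, where a ⪯ b means "a divides b", the join of two elements is their lcm. A brute-force search over a finite universe recovers it from the definition above:

```python
from math import gcd

def divides(a, b):
    """a ⪯ b in the divisibility poset."""
    return b % a == 0

def join(a, b, universe):
    """Least upper bound of a and b over the finite poset `universe`, or None."""
    uppers = [c for c in universe if divides(a, c) and divides(b, c)]
    least = [c for c in uppers if all(divides(c, u) for u in uppers)]
    return least[0] if least else None  # unique if it exists

P = range(1, 101)
print(join(4, 6, P))  # 12, i.e. lcm(4, 6)
```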

See also Lattice (algebraic structure)

Joint entropy

guillefix 3rd July 2016 at 2:10pm

In Information theory, the joint entropy of a pair of Random variables X and Y is defined as:

H(X,Y) = -\sum_{x,y} p(x,y) \log{p(x,y)}
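A small sketch (mine, not from the source) computing the joint entropy from a joint probability table, and checking the standard identity that for independent variables H(X,Y) = H(X) + H(Y):

```python
import numpy as np

def joint_entropy(p):
    """H = -sum p log2 p over the nonzero entries of a probability array."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

px = np.array([0.5, 0.5])          # H(X) = 1 bit
py = np.array([0.25, 0.25, 0.5])   # H(Y) = 1.5 bits
pxy = np.outer(px, py)             # independent joint distribution

h = joint_entropy(pxy)
print(h, joint_entropy(px) + joint_entropy(py))  # 2.5  2.5
```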

Joint entropy

JS animation libraries

guillefix 15th July 2016 at 9:37pm

Animation: http://laughinghan.github.io/radiance/

Awesome: timeline-based web animations with gui: https://spiritjs.io/

Also for animation: http://anime-js.com/

Jurisprudence

guillefix 8th April 2016 at 8:27pm

Jurisprudence is the science, study and theory of law.

https://en.wikipedia.org/wiki/Jurisprudence

Law - Springer

See Law

Jury stability criterion

guillefix 18th May 2016 at 6:53pm

One can use the Jury test to find if the roots of a polynomial are inside the unit circle, which is useful for stability analysis of Nonlinear maps. This test turns out to be useful in stability analysis of discrete time systems in control theory.

Jury test

Given a quadratic equation of the form:

P(\lambda) = \lambda^2 + a_1 \lambda + a_2 = 0

Both eigenvalues fall within the unit circle iff these three conditions hold:

  • P(+1) > 0
  • P(-1) > 0
  • a_2 < 1

The way to show this is to divide the problem into two cases (given a_1, a_2 are real):

  • Eigenvalues real: then, by a simple diagram, one can show that if the parabola crosses the x-axis, |\lambda_1|, |\lambda_2| < 1 is equivalent to P(+1) > 0, P(-1) > 0. Also, as a_2 = \lambda_1 \lambda_2, condition three holds.
  • Eigenvalues complex: then \lambda_1 = \lambda_2^*, and a_2 = \lambda_1 \lambda_2 = \lambda_1 \lambda_1^* = |\lambda_1|^2. Therefore a_2 < 1 \Leftrightarrow |\lambda_1| < 1 (and so |\lambda_2| < 1 too). The two other conditions are immediately satisfied because the parabola doesn't cross the axis when the eigenvalues are complex.
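The three conditions are easy to check mechanically. As a sanity check (my own sketch), here they are compared against explicitly computing the roots for random quadratics:

```python
import numpy as np

def jury_quadratic(a1, a2):
    """Jury conditions: both roots of lambda^2 + a1*lambda + a2 inside the unit circle."""
    P = lambda lam: lam ** 2 + a1 * lam + a2
    return P(1) > 0 and P(-1) > 0 and a2 < 1

rng = np.random.default_rng(1)
for _ in range(1000):
    a1, a2 = rng.uniform(-3, 3, size=2)
    brute = np.max(np.abs(np.roots([1, a1, a2]))) < 1  # explicit root check
    assert jury_quadratic(a1, a2) == brute
print("Jury test agrees with explicit root computation")
```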

K-theory

guillefix 22nd May 2016 at 3:46pm

K-theory is, roughly speaking, the study of certain kinds of invariants of large matrices...

K-Theory Past and Present

Examples of abelian invariants are traces and determinants

k-vertex rule percolation process

guillefix 13th June 2016 at 8:00pm

An Explosive percolation process that is based on choosing k vertices at random and adding edges among those vertices according to some rule.

k-vertex rules are actually a generalization of m-edge rules (aka Achlioptas processes), because an m-edge rule can be constructed from an n-vertex rule with n \geq 2m, which chooses the n vertices at random (possibly repeating, but still being able to form m distinct edges), and then chooses m edges at random within these vertices. Note that we need n \geq 2m so that we don't restrict the chosen edges to have some vertex in common.

m-vertex rule (as defined here): in processes following an m-vertex rule, the agent is presented with the random list (set) v_m of vertices, and, unless two or more are already in the same component, must add one or more edges between them, according to any deterministic or random rule that depends only on the history.

Some kk-vertex rules are examples of Non-self-averaging percolation process, showing novel supercritical phenomena, like stochastic staircases!

It was shown that the Percolation phase transition for processes following a vertex rule is continuous, as for the Achlioptas process. However, such processes can still show some discontinuity arbitrarily close to the critical point (see Non-self-averaging percolation process).
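A minimal sketch (mine) of the classic 2-edge Achlioptas "product rule" with union-find, contrasted with ordinary Erdős–Rényi edge addition: at 0.7n edges ER is already supercritical (giant component ≈ 0.48n), while the product rule (threshold ≈ 0.888n) still has only small components:

```python
import random

class DSU:
    """Union-find with component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_component(n, n_edges, rule, seed=0):
    rng = random.Random(seed)
    dsu = DSU(n)
    for _ in range(n_edges):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))
        if rule == "erdos-renyi":
            a, b = e1
        else:  # product rule: keep the edge minimizing the product of component sizes
            s = lambda e: dsu.size[dsu.find(e[0])] * dsu.size[dsu.find(e[1])]
            a, b = e1 if s(e1) <= s(e2) else e2
        dsu.union(a, b)
    return max(dsu.size[dsu.find(i)] for i in range(n))

n = 20000
er = largest_component(n, int(0.7 * n), "erdos-renyi")
pr = largest_component(n, int(0.7 * n), "product")
print(er / n, pr / n)  # the ER fraction is much larger
```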

Katz centrality

guillefix 16th February 2016 at 1:08am

See Measures and metrics for networks

Katz centrality solves the problem posed above by giving all vertices a "free" centrality:

\mathbf{x} = \alpha\mathbf{A}\mathbf{x} + \beta \mathbf{1} ....Eq. 2

or, rearranging and setting \beta = 1 (because all we care about is relative centralities):

\mathbf{x} = \beta (\mathbf{I} - \alpha\mathbf{A})^{-1} \mathbf{1} = (\mathbf{I} - \alpha\mathbf{A})^{-1} \mathbf{1}

This is the Katz centrality. Often one computes this not by inverting the matrix (which requires O(n^3) operations), but by iterating Eq. 2 (which requires just m multiplications per step, where m is the number of nonzero elements of \mathbf{A}, and often fewer operations overall).

A useful extension is to take β1β\beta \mathbf{1} \rightarrow \vec{\beta}, i.e. give each node possibly a different weight maybe expressing some non-network importance


By Taylor expanding it, we can see it is like Eigenvector centrality, but taking into account paths of all lengths, each weighted by a factor of \alpha per step.
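The two ways of computing it agree, of course. A tiny sketch (my own; the 5-node path graph is just a made-up test network) comparing the direct solve with iterating Eq. 2:

```python
import numpy as np

# Hypothetical test network: undirected path graph 0-1-2-3-4
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1

alpha = 0.2  # must be below 1/lambda_max for the series to converge
assert alpha < 1 / np.max(np.abs(np.linalg.eigvals(A)))

# Direct solve of x = alpha A x + 1
x_direct = np.linalg.solve(np.eye(5) - alpha * A, np.ones(5))

# Iterating Eq. 2 instead of inverting
x = np.ones(5)
for _ in range(200):
    x = alpha * A @ x + np.ones(5)

print(x_direct)  # the middle node is the most central
```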

katz_similarity.png

guillefix 13th February 2016 at 1:27pm

Kernel linear regression

guillefix 9th July 2016 at 3:58am

Regression using certain basis functions (i.e. find coefficients for a linear combination of these that fits the training data). Standard ones are polynomials (see the Weierstrass approximation theorem; but possible terms become very large as we increase the degree).

Can also use Gaussians, or radial basis functions (RBFs).

Once the kernel functions are chosen, one can use the same methods as for linear regression. Basically, we replace each input datum with the kernel functions evaluated at that datum.
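A minimal sketch of this (mine; the target function, RBF centers, and width are arbitrary illustrative choices): map each input to its vector of Gaussian basis-function values, then do ordinary least squares on the transformed inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function
x_train = rng.uniform(0, 2 * np.pi, 50)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=50)

centers = np.linspace(0, 2 * np.pi, 15)  # RBF centers
width = 0.5

def rbf_features(x):
    """Each input datum is replaced by the basis functions evaluated at it."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Same machinery as plain linear regression, just on the features
Phi = rbf_features(x_train)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

x_test = np.linspace(0.5, 2 * np.pi - 0.5, 100)
rmse = float(np.sqrt(np.mean((rbf_features(x_test) @ w - np.sin(x_test)) ** 2)))
print(rmse)  # small (typically well below 0.1)
```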

Kelvin's circulation theorem

guillefix 22nd June 2016 at 3:18am

Kinase

guillefix 22nd April 2016 at 11:58pm

https://en.wikipedia.org/wiki/Kinase

A kinase is an Enzyme that catalyzes the transfer of phosphate groups from high-energy, phosphate-donating molecules to specific substrates

Kinematic reversibility in fluid dynamics

guillefix 30th April 2016 at 1:12pm

(from my email)

In case anyone cares, I think I worked out the irreversibility thing (it's been good revision trying to figure it out, so may be good revision for you too:P):

1. All flow is time-reversible in the theoretical sense that if you reverse all particles' trajectories, you get another physical flow. This is not in general true, though, if you include the viscosity term (although it can be), as this is a term that is there to account for the degrees of freedom we aren't accounting for, so it is in general not time-reversible, just like friction isn't (balls don't just spontaneously start from rest while cooling the floor slightly; this is just the 2nd law).

2. What we usually talk about in fluid mechanics, though, is whether the flow is time-reversible in practice (I think this is called kinematic reversibility). What this means is whether or not I can perform the above theoretical operation of reversing the particles' trajectories by performing a practically reasonable action. Such a reasonable action is usually changing the boundary conditions of the flow, as that is easy to do. In the case of the dye drops, this is what they control: the surfaces of the cylinder. Now, the boundary conditions determine a certain steady-state flow. The important thing about viscous flows (low Re) is that they reach steady state very quickly, so that most of the particles' trajectories are spent in steady state, and so most of the trajectory is determined by the boundary condition (b.c.) we control. So we can just turn the cylinder one way, and then the other, and the particles will have very nearly retraced their steps (I think they turn the cylinder slowly because that keeps Re=UL/nu small). In higher Reynolds number flows, however, the time the system takes to reach the steady state set by the b.c.s is very significant. Therefore a significant portion of the particles' trajectories is spent in these transient periods. Now, I think the reason these transient periods break (practical) time reversal is that they are not determined completely by the b.c.s you control. I think this is because turbulence will most probably set off in them, and as we know, turbulence is random, i.e., out of your control, and thus you can't reverse that (significant) part of the flow. The reason I think turbulence will set off is that when you start moving the cylinder (in the experiment with the dye), the no-slip boundary condition will cause a boundary layer, which is very thin due to the low viscosity. This sharp gradient in velocity means high vorticity, which, as usual in high Re flows, will spread around before getting dissipated eventually.
This is just the standard onset of turbulence, actually.

Another note: I changed my mind on the pressure thing. I think Chris was right that the gradient of the pressure (though not the pressure itself) will change sign, for Stokes flow. This is actually just because what you do when you time-reverse is change the flow, and the flow determines the pressure distribution, so you can calculate what will happen to the pressure. If you do that for Stokes' equation, its gradient must change sign, as the viscous term does. Examples:

  • In the Stokes flow around a sphere, if you reverse the flow, you clearly reverse the pressure gradient, as the drag is in the other direction.
  • The cylinder with the dye is interesting. If you compute the viscous term, it turns out to be zero (due to symmetry, although not obvious really), so the pressure gradient is zero, and its sign doesn't matter. Actually, if you are more careful, you realize that it can't be zero: there should be a radial component to make the fluid go in a circle, but that radial component would be of the same order as the inertial term we ignored, which it balances.

However, in non-Stokes flow, like say in steady-state inviscid flow, for which Bernoulli's theorem holds, the gradient of the pressure doesn't change, as the other terms in the NS equation don't either! Example:

  • Venturi tube. Clearly, if you reverse the flow, the pressure is still high in the wide parts, and low in the constriction.

Kinetic theory

guillefix 4th April 2016 at 11:32pm

Kinetics of liquid-liquid unmixing

guillefix 10th February 2016 at 10:41pm

The mechanism by which phase separation occurs depends on whether the concentration proportions fall within the spinodal or outside it, i.e. whether the mixture is unstable or metastable (see Thermodynamics of liquid-liquid unmixing).

When it is unstable, the phase separation proceeds immediately and continuously, via a process known as spinodal decomposition.

When the mixture is in the metastable region of the phase diagram, then there is a free energy barrier to be overcome, which requires a large concentration fluctuation to form a nucleus, which can then grow. This is known as homogeneous nucleation. However, most often impurities trigger the growth before this happens, and this is known as heterogeneous nucleation.

Spinodal decomposition

When the mixture is in the unstable region, any small fluctuation in concentration will tend to be amplified, and this is known as spinodal decomposition.

This kind of "uphill diffusion" happens because the fundamental quantity that tends to be equilibrated, and thus diffuses to remove gradients, is the chemical potential (how to derive this from a more macroscopic description, perhaps using Kinetic theory??). The chemical potential is related to the first derivative of the free energy. So if the second derivative is positive (as outside the spinodal region), the higher the concentration the higher the chemical potential, and diffusion acts to reduce concentration gradients. However, inside the spinodal region, the second derivative is negative, so the chemical potential decreases with concentration, and thus diffusion acts to increase concentration gradients.

If this was the only mechanism, sharp features will grow the fastest (just as they decay the fastest in normal diffusion). However, there must be something we have neglected. This is because, experimentally, it is found that interfaces have free energy, which isn't included in our free energy (See LectureNotes regarding surface tension).

[add fig. 3.7 here]

A phenomenologically motivated addition to the free energy to account for this is a term proportional to the square of the gradient in concentration with respect to position.

Then one can derive a modified diffusion equation based on:

  • a continuity equation
  • the current being proportional to the gradient of the exchange chemical potential (μAμB\mu_A-\mu_B), with proportionality constant called the Onsager coefficient. The way to understand the origin of this is to realize that what matters is what the derivative of the free energy with respect to the concentration is, to determine what must happen at equilibrium.
  • the fact that the chemical potential is the (functional) derivative of the free energy with respect to the concentration.

One then obtains a nonlinear equation, which when linearized around \phi_0 gives the Cahn-Hilliard equation.
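The three ingredients above can be put together in a minimal 1D numerical sketch (my own; the grid, explicit Euler scheme, and parameters are arbitrary illustrative choices, with the free energy taken as the standard \phi^4 double well so the mixture separates toward \phi = \pm 1):

```python
import numpy as np

# 1D Cahn-Hilliard: dphi/dt = M d2/dx2 (phi^3 - phi - kappa d2phi/dx2)
# on a periodic grid with dx = 1, explicit Euler in time.
N, M, kappa, dt, steps = 128, 1.0, 1.0, 0.01, 5000

def lap(f):
    """Discrete Laplacian with periodic boundaries."""
    return np.roll(f, 1) + np.roll(f, -1) - 2 * f

rng = np.random.default_rng(0)
phi = 0.01 * rng.normal(size=N)  # small fluctuation around the mixed state

for _ in range(steps):
    mu = phi ** 3 - phi - kappa * lap(phi)  # exchange chemical potential
    phi += dt * M * lap(mu)                 # continuity: dphi/dt = div(M grad mu)

# Spinodal decomposition: the fluctuation grows and phi separates toward +-1,
# while the conserved mean stays (numerically) fixed.
print(float(phi.std()), float(phi.mean()))
```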


See more here: http://pruffle.mit.edu/~ccarter/3.21/Lecture_22/

kiwibirdgeno.jpg

Kleene star

guillefix 4th July 2016 at 11:17pm

A Kleene star, in Mathematical logic and Computer science, (or Kleene operator or Kleene closure) is a unary operation, either on sets of strings or on sets of symbols or characters.

If V is a set of symbols or characters then V* is the set of all strings over symbols in V, including the empty string ε.

It is often used in Coding theory, Formal language theory, etc.
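A tiny sketch (mine) enumerating V* shortest-first, truncated to a finite length since the full set is infinite:

```python
from itertools import product

def kleene_star(V, max_len):
    """Enumerate V* (all strings over symbols in V) up to max_len, shortest first."""
    for n in range(max_len + 1):
        for tup in product(V, repeat=n):
            yield "".join(tup)

words = list(kleene_star("ab", 2))
print(words)  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Note the empty string ε (here `''`) appears first, as the length-0 case.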

Knowledge

guillefix 8th July 2016 at 2:18am

Knowledge management

guillefix 16th May 2016 at 9:08pm

Kolmogorov complexity

guillefix 15th July 2016 at 7:16pm

aka algorithmic complexity, although that term may refer to some generalizations of Kolmogorov complexity too, I think

One of the main kinds of Descriptional complexity, based on the minimum size of a program (interpreted by a Turing machine) that produces (describes) the object.

Kolmogorov complexity is central in Algorithmic information theory.

Math 574, Lesson 4-3: Kolmogorov Complexity other videos

This is based on describing the information content of a discrete object such as a binary string xx in terms of the length of the shortest program that generates xx on universal Turing machine (UTM). This measure is called the Kolmogorov-Chaitin complexity or simply Kolmogorov complexity K(x)K(x) of xx.

AIT differs fundamentally from Shannon information theory because the latter is fundamentally a theory about distributions, whereas the former is a theory about the information content of individual objects. Descriptional complexity also differs from the notions of complexity used in Complex systems.
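Since K(x) is uncomputable, in practice one often uses a compressor as a crude, computable upper-bound proxy (a standard trick, not a claim from the source): no compressor can beat the shortest program, but the qualitative ordering — regular strings compress, random ones don't — is the same:

```python
import random
import zlib

random.seed(0)
simple = b"01" * 500                                      # highly regular, low K
rand = bytes(random.getrandbits(8) for _ in range(1000))  # incompressible w.h.p.

n_simple = len(zlib.compress(simple, 9))
n_rand = len(zlib.compress(rand, 9))
print(n_simple, n_rand)  # the regular string compresses enormously; the random one doesn't
```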

Lecture notes on descriptional complexity and randomness

Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines. See Coding theorem method

Deficiencies of KC

from here

Paucity theorems

Simple (trivial) strings are rare among all possible strings; hence the "paucity".

The frequent paucity of trivial strings


A Computable Measure of Algorithmic Probability by Finite Approximations

See MMathPhys oral presentation

Kolmogorov_complexity_definition.png

guillefix 14th April 2016 at 10:34am

kolmogorov_universality.png

guillefix 14th April 2016 at 10:50am

Kolmogorov-Sinai entropy

guillefix 7th July 2016 at 8:58pm

also called metric or measure-theoretical entropy

Kolmogorov-Sinai entropy

See the related Topological entropy

For a Measure-theoretical dynamical system, the metric entropy of the system with respect to a partition α\alpha is defined to be the Entropy rate of the stochastic process resulting from the partition.

The metric entropy (aka Kolmogorov–Sinai or measure-theoretic entropy) is then the supremum of {the metric entropy with respect to \alpha} over all finite partitions \alpha.

Metric entropy provides the maximum average information per unit of time obtainable from the dynamical system.

See Amigo's book for details. He also gives a good example with the tent map.

Note his notation \vee refers to the join of two sigma-algebras. See here

Kolmogorov–Smirnov test

guillefix 15th July 2016 at 9:33pm

Kondo effect

guillefix 12th July 2016 at 3:43pm

An effect, observed in Dilute magnetic alloy, by which the resistance rises at low temperature.

It is named after the Japanese physicist Jun Kondo, who in 1964 published a calculation that indicated how the resistance minimum arose

Konstantin Tsiolkovsky

guillefix 25th June 2016 at 3:34am

Kramers rate theory

guillefix 26th March 2016 at 4:34am

Laplace method approximation of the mean first-passage time over a barrier, for a degree of freedom following a Fokker-Planck equation. It is most easily solved in 1D, where the result is known as the Kramers escape time. The exponential has the same form as the phenomenological Arrhenius equation, and the pre-factor is known as the inverse attempt frequency.

It is used to estimate reaction rates in Chemical kinetics where one defines a reaction coordinate approximating the evolution of the relevant molecules and their potential energy.
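For reference, the standard overdamped 1D result (reconstructed from memory, since the equation itself only survives here as the image Kramers_escape_time.png — treat this as a sketch and check conventions against the image):

```latex
\tau \simeq \frac{2\pi\gamma}{\sqrt{U''(x_{\min})\,\lvert U''(x_{\max})\rvert}}\, e^{\Delta E / k_B T},
\qquad \Delta E = U(x_{\max}) - U(x_{\min})
```

where x_{\min} is the metastable minimum, x_{\max} the barrier top, and \gamma the friction coefficient in the overdamped Langevin equation \gamma \dot{x} = -U'(x) + \xi(t).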

Barrier crossover time from probability distribution

Can use the (conditional) probability distribution P(x,tx0,t0)P(x,t|x_0,t_0) when a barrier is present to calculate the crossover time. This is done by considering the flux JJ (see Fokker-Planck equation). The probability distribution for crossing the barrier is then:

P(t) = \frac{J(\Delta x, t | -\Delta x, t_0)}{\int_0^\infty dt\, J(\Delta x, t | -\Delta x, t_0)}

where we assume the barrier is at 0, so that -\Delta x and \Delta x are on opposite sides.

The probability distribution for crossing the barrier above, is the same as the probability distribution for the first passage time. To understand why we use the flux (current, JJ) to calculate this, imagine many instances of the Brownian particle in the potential. We can approximate the above P(t)P(t) by just considering frequencies, in the limit of infinite instances. Now, JJ is just calculated by counting the number of times a particle is found within dxdx of the point Δx\Delta x, at time tt, multiplied by its velocity at that moment.

Now, consider a first-passage path...

Well, the idea is that for every second-passage path, there is a symmetric one with opposite velocity at the measurement point (x,t), which thus cancels it in the sum:

Kramers_escape_time.png

guillefix 21st January 2016 at 3:44pm

Krohn–Rhodes theory

guillefix 9th March 2016 at 2:23am

Kuramoto model

guillefix 17th February 2016 at 8:32pm

See NonEq statmech notes.

Also:

http://www-sop.inria.fr/members/Olivier.Faugeras/MVA/ArticlesALire09/acebron-bonilla-etal-05.pdf

http://arxiv.org/pdf/1403.2083v2.pdf

https://en.wikipedia.org/wiki/Kuramoto_model

Things to note:

We transform in most manipulations (including in the notes) to a frame that rotates with angular frequency equal to the mean angular frequency of the oscillators, \langle \omega \rangle.

In this frame, the assumption is that the phase of the order parameter, \psi, is constant, and so can be chosen to be 0.

The fact that it is constant is used to deduce that for the non-phase-locked oscillators (in case of partial coherence) their probability distribution must be constant, so that they are in a state of dynamic equilibrium (because their drift velocity v can't be 0, as it is for the phase-locked states).

These differences in behaviour between phase-locked and non-phase-locked oscillators come from solving their dynamical equations (eq. (9) in this paper), the behavior of which depends on the parameter \omega_i/(K r_{st}).
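A minimal mean-field simulation sketch (my own; N, the time step, and the normal frequency distribution are arbitrary illustrative choices), using the order parameter r e^{i\psi} to write the coupling in O(N) form:

```python
import numpy as np

def kuramoto_r(K, N=500, dt=0.05, steps=3000, seed=0):
    """Euler-integrate the mean-field Kuramoto model; return the order
    parameter r averaged over the final 500 steps."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)
    rs = []
    for t in range(steps):
        z = np.mean(np.exp(1j * theta))      # r e^{i psi}
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
        if t >= steps - 500:
            rs.append(r)
    return float(np.mean(rs))

# For unit-normal frequencies g(0) = 1/sqrt(2 pi), so K_c = 2/(pi g(0)) ~ 1.6
r_sync = kuramoto_r(K=4.0)   # well above K_c: partial synchronization
r_async = kuramoto_r(K=0.5)  # below K_c: r stays at the O(1/sqrt(N)) noise level
print(r_sync, r_async)
```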

Langevin equation

guillefix 27th April 2016 at 1:36am

Langevin description of Brownian motion

Solving the Langevin equation

Non-inertial regime

with potential

Harmonic potential

Fokker-Planck equation

General case

Language

guillefix 5th July 2016 at 4:15am

A system used for Communication

Natural language

See also Formal language

Language and meaning

guillefix 21st January 2016 at 9:03pm

Laniakea Supercluster

guillefix 5th July 2016 at 3:27am

The Laniakea Supercluster (Laniakea; also called Local Supercluster or Local SCl) is the Galaxy supercluster that is home to the Milky Way and 100,000 other nearby galaxies.

Laplace method

guillefix 28th April 2016 at 2:17am

For integrals of the form:

I(x) = \int_a^b f(t) e^{x\phi(t)} dt \quad \text{as } x \rightarrow \infty

Contributions near global maxima of ϕ(t)\phi(t).

Watson lemma

Special case, for ϕ(t)=t\phi(t) = t

Laplace method

1. Restrict integral to a small region (of order ϵ\epsilon) around maxima of exponential function ϕ\phi, and confirm we are making an exponentially small error.
2. Expand f(t)f(t) and ϕ(t)\phi(t) in series valid in this region, so we get a series of integrals.
3. It is then usually easier to evaluate these integrals by extending the limits to infinity (after rescaling), confirming that we are again making an exponentially small error.
4. Confirm assumptions are self-consistent.

General Laplace integral

Three cases:

Case 1 The maximum is at t = a

\phi'(a) \leq 0 (since it is a maximum), and we assume it is not 0, so \phi'(a) < 0

I(x) \sim -\frac{f(a)e^{x\phi(a)}}{x\phi'(a)}

Case 2 The maximum is at t = b

I(x) \sim \frac{f(b)e^{x\phi(b)}}{x\phi'(b)}

Case 3 The maximum is at some t = c with a < c < b.

I(x) \sim \frac{\sqrt{2\pi}f(c)e^{x\phi(c)}}{\sqrt{-x\phi''(c)}}
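Case 3 is easy to verify numerically (my own sketch, with an arbitrary illustrative integrand): take I(x) = \int_{-1}^{1} e^{x \cos t} dt, so the interior maximum is at c = 0 with \phi(c) = 1, \phi''(c) = -1, f = 1, giving I(x) \sim \sqrt{2\pi/x}\, e^x. Working with e^{x(\cos t - 1)} avoids overflow:

```python
import numpy as np

x = 80.0
t = np.linspace(-1.0, 1.0, 200_001)

# e^{-x} I(x), computed by a fine Riemann sum
numeric = float(np.exp(x * (np.cos(t) - 1.0)).sum() * (t[1] - t[0]))
asymptotic = np.sqrt(2 * np.pi / x)  # e^{-x} times the Case 3 formula

ratio = numeric / asymptotic
print(ratio)  # tends to 1 as x grows; here already within ~1%
```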

Large-scale structure of networks

guillefix 16th February 2016 at 12:14am

Components

Networks often have the largest connected component covering most of the network (often more than 50% or 90%). This is sometimes called the "giant component" (however, this is sloppy usage, as "giant component" does not mean precisely the same as "largest component" in network theory).

In directed networks, we can represent the largest strongly connected component, and its in and out components using a "bow tie" diagram

Shortest paths and the small-world effect

The small-world effect refers to the finding that the typical distance between nodes in many –perhaps most– networks is surprisingly small. The "typical distance" usually refers to the "mean geodesic distance". Networks that show this property are dubbed small-world networks.

The origin of the term comes from a series of experiments by social psychiatrist Stanley Milgram, the so called "small-world" studies, in the 60s.

Models of networks often show that this distance scales as \log{n}, where n is the size of the network. This is often given as an upper limit for the growth of the distance with n, so that the network is said to have the small-world property.

The diameter (the largest geodesic distance) is also found to scale similarly. For scale-free networks, however, an interesting structure is often found, with a core that contains most nodes and is of lengthscale \log{\log{n}}, making the mean distance scale like that too; but there are a few nodes along "streamers" or "tendrils" around the core, whose lengthscale scales as \log{n}, making the diameter scale like that too.

Another interesting effect that is observed, termed funneling, is that the geodesic paths (paths with shortest length) between vertices j and i often pass through only a few particularly well-connected neighbours of i, for most choices of starting point j. Thus if one follows shortest paths to try to reach i, one is likely to be "funnelled" through one or a few particular neighbours of i.
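The \log{n} scaling of the mean distance is easy to see numerically (my own sketch; graph size and mean degree are arbitrary choices): BFS from a sample of sources on a random graph with mean degree c gives distances close to \log{n}/\log{c}:

```python
import random
from collections import deque

def mean_distance(adj, sources):
    """Average BFS distance over reachable pairs from the sampled sources."""
    total, count = 0, 0
    for s in sources:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

random.seed(0)
n, c = 2000, 6               # n nodes, mean degree c
adj = [[] for _ in range(n)]
for _ in range(n * c // 2):  # random graph with m = nc/2 edges
    u, v = random.randrange(n), random.randrange(n)
    adj[u].append(v)
    adj[v].append(u)

d = mean_distance(adj, sources=random.sample(range(n), 50))
print(d)  # close to log(n)/log(c) ~ 4.2
```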

Degree distributions

The degree distribution pkp_k is the fraction of nodes in the network that have degree kk.

The same information can be given in a degree sequence, that is a sequence of the degrees of all the nodes in the network. One can easily see from simple examples, that this information doesn't, however, specify the network structure, in general.

For directed networks, we can define the joint in- and out-degree distribution p_{jk}, the probability that a vertex has in-degree j and out-degree k. This has rarely been measured in practice, though.

Power laws and scale-free networks

Often (though definitely not always), real networks show a power law degree distribution:

pk=Ckαp_k=Ck^{-\alpha}

where α\alpha is the exponent. Values 2<α<32<\alpha<3 are typical. These are examples of right-skewed distributions. Typically, the power law is obeyed only in the tail of the distribution, not for small values of kk, and it also typically fails at the very high end, for example due to some cut-off.

Networks with power-law degree distributions are sometimes called scale-free networks.
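A power-law distribution can be sampled and its exponent estimated numerically; this is my own sketch (inverse-transform sampling of the continuous power law, and the standard maximum-likelihood estimator of α\alpha; the parameter values are arbitrary):

```python
import math, random

# Inverse-transform sampling of a continuous power law p(x) ~ x^(-alpha) for x >= xmin:
# the CDF is F(x) = 1 - (x/xmin)^(1-alpha), so x = xmin * (1-u)^(-1/(alpha-1)).
alpha, xmin, n = 2.5, 1.0, 50_000
rng = random.Random(0)
xs = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

# Maximum-likelihood estimate of the exponent: alpha_hat = 1 + n / sum(ln(x_i / xmin))
alpha_hat = 1.0 + n / sum(math.log(x / xmin) for x in xs)
print(alpha_hat)  # close to the true value 2.5
```

The MLE is far more reliable than fitting a straight line to a log-log histogram, which is the naive approach.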

Distributions of other centrality measures

Distributions of the values for nodes for others centrality measures defined in Measures and metrics for networks.

Centralization

We can use the distribution of centrality values to answer the question: "how spread out are the centrality values?". A high spread indicates high centralization, while a low spread indicates decentralization.

A measure for it is:

C=i=1N[Cb(i)Cb(i)]i=1N[C~b(i)C~b(i)]\mathcal{C} = \frac{\sum_{i=1}^N [C_b(i^*)-C_b(i)]}{\sum_{i=1}^N [\tilde{C}_b(i^*)-\tilde{C}_b(i)]}

where, in the denominator, C~b(i)\tilde{C}_b(i) is the betweenness centrality of node ii and ii^* is a node that maximizes it, both computed for the graph that maximizes C~b(i)\tilde{C}_b(i^*) (a star graph, in the case of betweenness). The Cb(i)C_b (i) without the tilde is for the actual graph.

Dynamical importance (& eigenvalue elasticity)

  • measures changes in eigenvalues of AA due to some perturbations
  • suppose GG strongly connected.

Edge dynamical importance of (i,j)(i,j) is:

I(i,j)=Δλijλ1I(i,j)=-\frac{\Delta \lambda_{ij}}{\lambda_1}

where λ1\lambda_1 is the largest eigenvalue of A, and Δλij\Delta \lambda_{ij} is the change in λ1\lambda_1 from removing edge from jj to ii (i.e. removing AijA_{ij}).

The Node dynamical importance of (i)(i) is:

I(i)=Δλiλ1I(i)=-\frac{\Delta \lambda_{i}}{\lambda_1}

where Δλi\Delta \lambda_{i} is the change in λ1\lambda_1 from removing node ii (i.e. removing column and row ii).

One can show that:

I(i)viuivTuI(i)\approx \frac{v_i u_i}{v^T u}

I(i,j)Aijviujλ1vTuI(i,j)\approx \frac{A_{ij}v_i u_j}{\lambda_1 v^T u}

where the approximation consists in keeping the changes in the eigenvalue and eigenvector only to 1st order. See problem sheet 4 answers for proof.
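The first-order formula can be sanity-checked numerically; this is my own toy sketch (a symmetric adjacency matrix, so the left and right eigenvectors coincide and I(i)vi2I(i) \approx v_i^2 for a normalized vv; the `power_iteration` routine and the example graph are my own constructions):

```python
# Compare the approximation I(i) ~ v_i u_i / (v^T u) with directly removing each node
# and recomputing the leading eigenvalue, on a small undirected graph (so u = v).

def power_iteration(A, iters=500):
    """Leading eigenvalue and (2-norm-normalized) eigenvector of a non-negative symmetric matrix."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    norm = sum(x * x for x in v) ** 0.5
    return lam, [x / norm for x in v]

def adjacency(n, edges):
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1.0
    return A

n = 6
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (2, 3)]  # hub 0 plus a few extra edges
lam1, v = power_iteration(adjacency(n, edges))

# First-order approximation: I(i) ~ v_i^2 (v is normalized and u = v for symmetric A)
I_approx = [vi * vi for vi in v]

# Direct computation: remove node i (drop its edges), recompute the leading eigenvalue
I_direct = []
for i in range(n):
    sub_edges = [(a, b) for a, b in edges if i not in (a, b)]
    lam_removed, _ = power_iteration(adjacency(n, sub_edges))
    I_direct.append((lam1 - lam_removed) / lam1)

# Both rankings should identify the hub (node 0) as most dynamically important
print(I_approx.index(max(I_approx)), I_direct.index(max(I_direct)))
```

The approximation is only first-order, so the numerical values differ from the direct ones, but the ranking of nodes by importance agrees on this example.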

Structural properties are related (often via the spectrum) to dynamics. Does he mean the dynamics of removing nodes and edges?

Clustering coefficients

(see Transitivity (Graph theory))

Clustering coefficients CC are often found to be larger than one would expect if edges were randomly chosen (for a fixed degree distribution, for example; see formula 8.24 in Newman's book).

This is often the case for social networks. One mechanism that can lead to this is triadic closure (an open triad being closed, say because the common vertex introduces the other two, in the case of social networks). This has indeed been found to happen in cases where time-resolved data on network formation is studied.

In the Internet, however, CC is much smaller than the value predicted by chance (eq. 8.24 in Newman), suggesting there are forces that disfavour the creation of triangles. However, different models to compare against (i.e. other random graph models), and other ways of measuring clustering coefficients, give different results.

Other motifs apart from triangles are also measured sometimes and show interesting patterns (like in neural networks).

Local clustering coefficients often show an interesting anti-correlation with degree in real networks. An explanation of this phenomenon can be given if the network has a community structure, with nodes grouped together in groups of varying sizes. A hierarchical structure has also been proposed to explain this.
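As a concrete calculation (my own toy example), the global clustering coefficient C=3×(number of triangles)/(number of connected triples)C = 3 \times (\text{number of triangles}) / (\text{number of connected triples}) for a triangle with one pendant node:

```python
from itertools import combinations

# Toy graph: a triangle {0,1,2} with a pendant node 3 attached to node 0.
edges = {(0, 1), (0, 2), (1, 2), (0, 3)}
nodes = {v for e in edges for v in e}
adj = {v: set() for v in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Connected triples = paths of length 2 = sum over nodes of C(degree, 2)
triples = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes)
triangles = sum(1 for a, b, c in combinations(sorted(nodes), 3)
                if b in adj[a] and c in adj[a] and c in adj[b])
C = 3 * triangles / triples
print(C)  # 3 * 1 / 5 = 0.6
```

Here the degrees are 3, 2, 2, 1, giving 3 + 1 + 1 + 0 = 5 connected triples and one triangle.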

Assortative mixing

Assortative mixing is the tendency of nodes to connect to others that are like them in some way. The formula given there is not very efficient to compute, because of the double sum, which goes like n2n^2. There is, however, a more efficient one that goes like mm, the number of edges, which often scales more slowly with nn (see eq. 8.27 in Newman's book).

Empirically, it is found that most social networks have positive assortativity while most others (technological, biological) have negative assortativity.

Part of the explanation seems to be that most networks are naturally disassortative by degree because they are simple graphs (see Mathematics of networks): the number of connections between high-degree nodes is limited, so if there aren't many high-degree nodes, they will have to connect mostly to lower-degree nodes (I think this is the gist of the explanation).

Social networks, on the other hand, seem to overcome this due to their group structure (high clustering coefficient) so that in small groups people of low degree will be mostly connected to people with low degree (i.e. within the small group), and the larger groups will contribute to making people of high degree being mostly connected to people of high degree (i.e. within the large group).
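The O(m)O(m) edge-sum form of the assortativity coefficient can be sketched as follows (my own implementation of the single-pass formula, in the style of Newman's eq. 8.27; the example graphs are my own):

```python
def degree_assortativity(edges):
    """Degree assortativity r, computed in a single pass over the edge list
    (each undirected edge listed once, as a pair of node labels)."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    m = len(edges)
    s1 = sum(degree[a] * degree[b] for a, b in edges) / m            # <jk>
    s2 = sum((degree[a] + degree[b]) / 2 for a, b in edges) / m      # <(j+k)/2>
    s3 = sum((degree[a] ** 2 + degree[b] ** 2) / 2 for a, b in edges) / m
    return (s1 - s2 ** 2) / (s3 - s2 ** 2)

star = [(0, i) for i in range(1, 5)]        # hub + leaves: perfectly disassortative
blocks = [(0, 1), (1, 2), (2, 0), (3, 4)]   # triangle + separate edge: perfectly assortative
print(degree_assortativity(star), degree_assortativity(blocks))  # -1.0 1.0
```

A star is the extreme disassortative case (every edge joins the highest-degree node to a degree-1 node), while a disjoint union of regular components is perfectly assortative.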

Latex

guillefix 11th May 2016 at 12:39pm

Latex is a colloidal dispersion of polymer particles in a liquid.

Lattice (algebraic structure)

guillefix 14th July 2016 at 1:36am

A lattice is an Algebraic structure defined as:

a poset LL in which every pair of elements possesses a join and a meet

A unit element in a lattice LL is an element 11 such that, for all aLa \in L, a1a \preceq 1. A null element in a lattice LL is an element 00 such that, for all aLa \in L, 0a0 \preceq a.

The lattice is complete if a Greatest lower bound and a Least upper bound exist for every subset SS of LL (all that is guaranteed by the definition of a lattice is that these bounds will exist for all finite subsets of L). If these exist, they are denoted as S\wedge S, and S\vee S, respectively.

Lattice of subsets

guillefix 14th July 2016 at 2:25am

A lattice:
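The lattice of subsets (ordered by inclusion, with join = union and meet = intersection) can be sketched in code; this is my own illustration using Python frozensets:

```python
from itertools import combinations

# The lattice of subsets of {1, 2, 3}, ordered by inclusion.
base = {1, 2, 3}
subsets = [frozenset(c) for r in range(len(base) + 1) for c in combinations(base, r)]

def join(a, b):   # least upper bound under inclusion
    return a | b

def meet(a, b):   # greatest lower bound under inclusion
    return a & b

# Absorption laws, which characterize lattices:
for a in subsets:
    for b in subsets:
        assert join(a, meet(a, b)) == a   # a v (a ^ b) = a
        assert meet(a, join(a, b)) == a   # a ^ (a v b) = a

# Unit element (the whole set) and null element (the empty set):
assert frozenset.union(*subsets) == frozenset(base)
assert frozenset.intersection(*subsets) == frozenset()

# The lattice is complete: any collection of subsets has a least upper bound
# (union of all) and a greatest lower bound (intersection of all).
S = [frozenset({1}), frozenset({1, 2})]
print(frozenset.union(*S), frozenset.intersection(*S))
```

Since arbitrary unions and intersections of subsets are again subsets, the subset lattice is a complete lattice.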

Lattice-like algebraic structures

guillefix 14th July 2016 at 1:10am

Law

guillefix 17th May 2016 at 12:58am

Layers for deep learning

guillefix 9th July 2016 at 4:18am

Linear layer. Linear function

ReLU layer. Rectified linear unit

For x=0, may use subderivatives..

Very popular

maxout unit

Learning and communication

guillefix 8th April 2016 at 4:32pm

Learning <==> Education

Education - Springer

Learning theory

guillefix 26th July 2016 at 3:27am

See Machine learning

Mathematical theory of learning.

Learning problem: Design a system that improves on its ability to perform task T, as measured by performance measure P, by going through experience E.

Empirical risk minimization

Minimize a cost function, which often is the negative log-likelihood (similar to entropy; more precisely, cross-entropy or relative entropy), which corresponds to maximizing likelihood. Likelihood is the probability of getting the right yy given xx and θ\theta, i.e. the probability that a given model predicts the right outputs. This is equivalent to finding the most likely θ\theta in the Bayesian posterior, given a flat prior (but if we add a regularizer, we can tweak the prior, by just adding a term to the log-likelihood). If our model uses a Gaussian distribution to predict the data (where the θ\thetas are the means), maximizing likelihood is equivalent to minimizing the spring energy of springs placed vertically between the fit curve and the data.

The maximum likelihood is found by Optimization, often by Stochastic gradient descent.
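A minimal sketch of this (my own toy example, not from any reference): maximum likelihood for the mean of a Gaussian, found by gradient descent on the negative log-likelihood, which up to constants is the "spring energy" i(xiθ)2/2\sum_i (x_i-\theta)^2/2, so the minimizer is the sample mean:

```python
import random

# Maximum likelihood for the mean of a Gaussian via gradient descent on the NLL.
rng = random.Random(42)
data = [rng.gauss(3.0, 1.0) for _ in range(200)]

theta, lr = 0.0, 0.1 / len(data)
for _ in range(500):
    grad = sum(theta - x for x in data)   # d NLL / d theta = sum_i (theta - x_i)
    theta -= lr * grad

sample_mean = sum(data) / len(data)
print(theta, sample_mean)  # the two agree
```

For this quadratic cost the iteration contracts the error by a constant factor each step, so it converges to the sample mean, the closed-form maximum-likelihood estimate.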

If we want the whole distribution of likelihoods over θ\thetas, we need to use Bayesian statistics, which involves doing complicated integrals, often done numerically using Monte Carlo methods.


file:///home/guillefix/Dropbox/Oxford/Systems%20Biology%20DPhil/Research/schoelkopf.pdf


Adaptive resonance theory The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge.

Least upper bound

guillefix 14th July 2016 at 1:30am

Natural extension of the join of two elements to an arbitrary Set of elements of a poset

Interpreting the Partial ordering as "less than or equal", it can be understood as the least point that is greater than or equal to all the points in the set.

Lempel-Ziv algorithms

guillefix 9th July 2016 at 4:50am

Lempel-Ziv complexity

guillefix 21st July 2016 at 3:30pm

Life sciences

guillefix 8th July 2016 at 1:34am

https://en.wikipedia.org/wiki/List_of_life_sciences

"The life sciences comprise the fields of science that involve the scientific study of living organisms – such as microorganisms, plants, animals, and human beings – as well as related considerations like bioethics. While biology remains the centerpiece of the life sciences, technological advances in molecular biology and biotechnology have led to a burgeoning of specializations and interdisciplinary fields."

Life sciences - Springer

Life sciences - Elsevier, ScienceDirect


Cochrane review

Alan Hastings - Population biology

Ben Goldacre-Bad Pharma_ How Drug Companies Mislead Doctors and Harm Patients-Faber & Faber (2012)

Five-seconds rule paper

Forced movements, tropisms, and animal conduct

Hyaloid canal

Fruit and vegetable consumption and all-cause, cancer and CVD mortality: analysis of Health Survey for England data

Singh S., Ernst E. Trick or treatment alternative medicine on trial 2008

Statistical connectivity provides a sufficient foundation for specific functional connectivity in neocortical neural microcircuits

The mechanistic conception of life - biological essays - Loeb, Jacques, 1859-1924

http://www.trickortreatment.com/

Why dont animals have wheels - Dawkins

Lighting

guillefix 1st July 2016 at 11:24pm

Lighting or illumination is the deliberate use of light to achieve a practical or aesthetic effect.

https://en.wikipedia.org/wiki/Lighting

Likelihood function

guillefix 25th June 2016 at 3:14pm

Likelihood function, L\mathcal{L} is defined as

L=P(datatheory)\mathcal{L} = \text{P}(\text{data}|\text{theory})

I.e. the probability of the data given the theory.

One often considers the log-likelihood, which is just the log of the likelihood.

See also Fisher information matrix

Limits and infinity

guillefix 17th June 2016 at 9:49pm

A supertask refers to an infinite number of actions performed in a finite amount of time.

This is analogous to other "super"-things, like "supersolids": solids with a finite volume but an infinite surface area (like Gabriel's horn, or other shapes that don't have to be unbounded in linear size).

How To Count Past Infinity


Other mathematical objects defined as limits are space-filling curves, described in this video by 3Blue1Brown, which also explains the usefulness of infinite results in a finite world. Basically, infinite results are always described as a limit of a sequence of finite results, and these finite results are themselves useful. The concept of infinity is still useful because it allows one to understand and summarize these finite results in simple ways.

Linear algebra

guillefix 30th June 2016 at 4:45pm

Linear filter

guillefix 1st July 2016 at 5:07pm

Linear regression

guillefix 9th July 2016 at 3:57am

See Regression analysis.

Least mean squares

Use Matrix calculus for optimization: leads to normal equations (analytical solution to least squares), etc.

See here
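A minimal sketch of the normal equations for a straight-line fit (my own toy example; the data are assumed noise-free so the fit is exact):

```python
# Least squares for a line y = a + b*x via the normal equations (X^T X) beta = X^T y,
# solved directly for the 2x2 case by Cramer's rule.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 1 + 2x

n = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations:  n*a  + Sx*b  = Sy
#                    Sx*a + Sxx*b = Sxy
det = n * Sxx - Sx * Sx
a = (Sy * Sxx - Sx * Sxy) / det
b = (n * Sxy - Sx * Sy) / det
print(a, b)  # 1.0 2.0
```

For larger design matrices one would solve the same system with a linear-algebra routine (or, better for conditioning, a QR decomposition) rather than by hand.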

Linguistics

guillefix 8th April 2016 at 9:00pm

Linux

guillefix 24th June 2016 at 11:28pm

Liquid crystals

guillefix 11th May 2016 at 1:28pm

Liquid crystals correspond to matter in non-isotropic phases, like the nematic, or smectic phases, but which don't have full crystalline order.

See more in Principles of condensed matter physics book, and de Gennes and Prost, "The physics of liquid crystals". Liquid crystal theory

Landau-de Gennes bulk free energy density

I think it is derived following Landau's theory of phase transitions, given an order parameter: one includes the terms that satisfy certain symmetries.

Fbulk=AQijQji/2+BQijQjkQki/3+C(QijQji)2/4F_{\text{bulk}} = A Q_{ij} Q_{ji}/2 + B Q_{ij} Q_{jk} Q_{ki}/3 + C (Q_{ij} Q_{ji})^2/4

where the order parameter, for uniaxial LCs, is:

Qij=3q2ninjδij/3Q_{ij} = \frac{3q}{2} \langle n_i n_j - \delta_{ij} /3 \rangle

where qq is a scalar indicating the level of ordering (i.e. the variance of the individual molecules' directions about the director field n\mathbf{n}), and n\mathbf{n} is the director (the direction in which the molecules point on average at a given point), where the direction is considered as a ray, i.e. n\mathbf{n} and n-\mathbf{n} are physically equivalent.

Generalized elasticity of liquid crystals

The free energy of distortion (per unit volume) of a liquid crystal has the form:

Fd=12K1(n)2+12K2(n×n)2+12K3(n×(×n))2F_d=\frac{1}{2}K_1(\nabla \cdot \mathbf{n})^2+\frac{1}{2}K_2(\mathbf{n}\cdot\nabla \times \mathbf{n})^2+\frac{1}{2}K_3(\mathbf{n}\times (\nabla \times\mathbf{n}))^2

where K1K_1, K2K_2, and K3K_3 are the elastic constants corresponding to the three types of elastic deformation that alter the long-range order in liquid crystals (and are thus opposed by elastic forces):

  • splay
  • twist
  • bend

Fréedericksz transition


See Complex fluid dynamics for the dynamics of liquid crystals


People

P.G. de Gennes (see his book on Physics of liquid crystals)

liquid-liquid_unmixing.png

guillefix 9th February 2016 at 8:29pm

liquid-liquid_unmixing2.png

guillefix 9th February 2016 at 8:38pm

liquid-liquid_unmixing3.png

guillefix 9th February 2016 at 8:50pm

Liquid-phase sintering

guillefix 3rd July 2016 at 6:09pm

Local Group

guillefix 5th July 2016 at 3:29am

The Local Group is the Galaxy group that includes the Milky Way.

Locomotion

guillefix 31st May 2016 at 12:22am

log-det identity

guillefix 5th July 2016 at 11:42pm

lndetM=tr lnM\ln{\det{\mathbf{M}}} = \text{tr }{\ln{\mathbf{M}}}
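A quick numerical check of the identity (my own sketch): both sides equal the sum of the logarithms of the eigenvalues, here for a 2×2 symmetric positive-definite matrix whose eigenvalues are available in closed form:

```python
import math

# Check ln det M = tr ln M for M = [[a, b], [b, d]], symmetric positive-definite.
# tr ln M = sum of ln(eigenvalues), and det M = product of eigenvalues.
a, b, d = 2.0, 1.0, 2.0
det = a * d - b * b

# Eigenvalues of a symmetric 2x2 matrix
mean = (a + d) / 2
disc = math.sqrt(((a - d) / 2) ** 2 + b * b)
lam1, lam2 = mean + disc, mean - disc
assert lam2 > 0   # positive definite, so the logarithms are defined

print(math.log(det), math.log(lam1) + math.log(lam2))  # both equal ln 3
```

For general matrices one computes tr lnM\text{tr }\ln \mathbf{M} from the eigenvalues (or a matrix logarithm routine); the identity holds whenever lnM\ln \mathbf{M} is defined.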

Logic

guillefix 29th March 2016 at 3:08pm

Logistic regression

guillefix 9th July 2016 at 4:05am

lol

guillefix 11th June 2016 at 8:23pm

Long short-term memory

guillefix 9th July 2016 at 4:20am

Long-range interacting systems

guillefix 6th May 2016 at 12:31pm

Statistical Mechanics of Systems with Long-Range Interactions (on first lect of this part) book

Book: Physics of Long-Range Interacting Systems

Often take them to have two-body interaction of the form:

U(r)1rd+σU(r) \sim \frac{1}{r^{d+\sigma}}

where dd is the dimension.

For σ0\sigma \leq 0, the systems are non-additive (or non-extensive), in the sense of Statistical physics, so that, for example, energy is not simply proportional to volume.

Examples

  • self-gravitating system (σ=2\sigma = -2)
  • dipolar magnets (σ=0 \sigma =0)
  • charged plasma
  • vortices in 2 dimensions. U(r)lnrU(r) \sim \ln{r}, σ=2\sigma = -2 (effectively)

Lorenz curves for power law distributions

guillefix 23rd June 2016 at 11:21pm

See Power laws

Another interesting quantity (here applied to networks, though applied to wealth distributions and elsewhere, of course) is the fraction of ends of edges WW that connect to the fraction PP of nodes with the highest degrees (i.e. the top 100P100P percent of nodes, by degree). It can be shown that for scale-free networks:

W=P(α2)/(α1)W=P^{(\alpha-2)/(\alpha-1)}

The curves WW vs. PP are called Lorenz curves, after Max Lorenz. For example, for the World Wide Web links, α2.2\alpha \approx 2.2 and the curve shows that 50% of links go to the top 2% "richest" pages ("richer" meaning with higher number of links). Actually, as the WWW doesn't follow a perfect power law, the real number is closer to 1.1%

This is related to Gini coefficients. More on power laws

As a comparison, one can calculate the Lorenz curve for an exponential distribution. Both WW and PP go like exe^{-x} for large xx (i.e. small PP or WW), but WW carries an extra factor of xx: the derivation below gives W=PPlnPW = P-P\ln{P}, so, for example, the top 1%1\% hold about 5.6%5.6\% of the wealth rather than just 1%1\%.

W=xtetdt=[tet]x+xetdt=(x+1)exW = \int_x^\infty t e^{-t} dt = \left[-t e^{-t}\right]_x^\infty + \int_x^\infty e^{-t} dt = (x+1)e^{-x}

P=exP = e^{-x}

W=PPlnP\therefore W = P-P\ln{P}

The typical plot, however, shows the income of the bottom 100(1P)100(1-P)%, i.e. 1W1-W, vs. that percent from the bottom, i.e. 100(1P)100(1-P)%. Here is the resulting plot in WolframAlpha. This shows that inequality is not at all exclusive to power law distributions. In fact, the only distribution with a perfectly equal Lorenz curve corresponds to everyone having the same, i.e. the distribution is a Dirac delta centered on a certain point.

However, power law distributions often do show more inequality than exponential distributions. For instance, in power laws a typical situation is the famous "80-20 rule", by which the top 20% have 80% of the income. For the exponential distribution, W(0.2)=0.20.2ln0.20.52W(0.2) = 0.2 - 0.2\ln{0.2} \approx 0.52, so the top 20% have "only" about 52% of the income. Note that the exponential Lorenz curve W=PPlnPW = P-P\ln{P} has no free parameter, so every exponential distribution gives the same curve.
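Evaluating the two Lorenz-curve formulas above numerically (my own check; the formulas W=P(α2)/(α1)W=P^{(\alpha-2)/(\alpha-1)} and W=PPlnPW = P - P\ln{P} are those derived in the text):

```python
import math

def W_power_law(P, alpha):
    """Lorenz curve for a pure power-law distribution with exponent alpha."""
    return P ** ((alpha - 2) / (alpha - 1))

def W_exponential(P):
    """Lorenz curve for an exponential distribution (no free parameter)."""
    return P - P * math.log(P)

# WWW example: the top 2% of pages for alpha ~ 2.2 receive about half the links
print(W_power_law(0.02, 2.2))
# Exponential: the top 20% hold about half the wealth...
print(W_exponential(0.2))
# ...and the top 1% hold about 5.6%: unequal, but far less so than the power law
print(W_exponential(0.01))
```

This makes the comparison concrete: at the same P=0.01P=0.01, the power law with α=2.2\alpha=2.2 gives a wealth share of nearly half, versus a few percent for the exponential.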

What preferential attachment (and its resulting power law distributions) does is not to make extreme events possible (they are possible in other networks), but to make them more likely (the power law decays less rapidly). In the preferential attachment model, this is because extremes are amplified due to the nature of the model.

Low Reynolds number

guillefix 3rd June 2016 at 3:09am

See Active matter, Microhydrodynamics, and Kinematic reversibility in fluid dynamics for more

Zero Reynolds number doesn't mean no acceleration. It just means that no force is needed to cause that acceleration.

In the zero Re limit, if the swimmer accelerates (say by varying the velocity of the corkscrew), and if it has a finite mass, the fluid will exert a net force on the swimmer, and thus the swimmer will exert a net force on the fluid, momentarily creating a Stokeslet component. If we somehow had a small but very heavy swimmer with a large thrust too, it would then create a Stokeslet velocity field for a significant period of time.

Life at low Reynolds number

Happel and Brenner book: Low Reynolds number hydrodynamics (book)

Reciprocal theorem

The reciprocal theorem allows one to determine results for one Stokes-flow field based upon the solution of another Stokes flow in the same geometry, i.e. having the same boundaries but different boundary conditions.

See A physical introduction to suspension dynamics.

Machine code

guillefix 30th June 2016 at 1:04am

Often programmed via Assembly (programming language)

Machine learning

guillefix 12th July 2016 at 12:34am

See Artificial and machine intelligence and Artificial intelligence, Deep learning

Book recommendations

  • Building Machine Learning Systems with Python
  • Machine learning in Matlab
  • Lecture list of Andrew's course: lecture notes
  • Andrew Ng machine learning course https://www.youtube.com/watch?v=UzxYlbK2c7E . On lecture 2
  • Machine Learning - mathematicalmonk
  • Machine Learning: A Probabilistic Perspective and here
  • Machine Learning: Discriminative and Generative (The Springer International Series in Engineering and Computer Science) https://en.wikipedia.org/wiki/Generative_model

Supervised learning

Training data consists of inputs and outputs. We want to find a function relating inputs to outputs, to then be able to predict new outputs from new inputs. We need a way to represent the function approximation, with some parameters (the model):

and a learning algorithm to find best parameters for the data.

Two main types:

  • Regression. Output value is continuous
  • Classification. Output value is discrete

New paradigm: Deep learning

Unsupervised learning

Intro by Andrew Ng

Self-organizing map

Cocktail party problem. Independent component analysis

K-means

Clustering

Community clustering in networks

Variations on supervised and unsupervised

Variations on supervised and unsupervised

Semi-supervised learning

You are given a set of inputs xx, but you only have the corresponding outputs yy for some of them. You have to predict the yy for the rest (by learning the function y(x)y(x), for instance, like in Supervised learning).

Active learning

Like semi-supervised learning but the algorithm can ask for extra data, which it deems to be the most useful data to ask for.

Decision-theoretic learning

Basically loss-functions/costs used by the learning agent are based on Decision theory. See example here.

Reinforcement learning

To me it seems like the difference with supervised learning is that you don't specify input-output pairs, but just outputs: you specify desired outputs and undesired outputs. There is no input, but the problem is still not trivial (i.e. one where it only ever produces one output), because the model is probabilistic.

Sequence of decisions

Reward function

Used often in robotics.

Learning theory and Learning algorithms

Deep learning

Go deep into the rabbit hole

Bayesian inferential statistics

Graphical models

Good framework: Stan


Deep Learning Lecture 5: Regularization, model complexity and data complexity (part 2)

So the simplest model that works seems to work best most of the time. Seems like an example of Occam's razor, and thus related to Solomonoff's ideas on inference (see Algorithmic information theory). Epicurus principle also related to Bayesian inference, because we give a distribution over models, but we keep all of them.

Hmm, also your error can't be smaller than the fundamental noise in the data. Well it can, but your model will at best be wasteful then.


Try Torch:

See https://www.youtube.com/watch?v=DHspIG64CVM#t=45m40s

Machine learning in science and engineering

guillefix 3rd July 2016 at 4:54am

Machine_learning_uses1.png

guillefix 4th February 2016 at 6:30pm

Machine_learning_uses2.png

guillefix 4th February 2016 at 6:32pm

Macroeconomics

guillefix 7th May 2016 at 5:59pm

Magnetohydrodynamics

guillefix 7th May 2016 at 6:15pm

Oxford notes

Magnetohydrodynamics (a.k.a. MHD).

GdR Dynamo 2015 (nice lecture series on MHD and related topics)

See also other lecture courses in MMathPhys


Note:

Flux freezing does not imply a one-to-one correspondence between the magnetic field strength B\mathbf{B} and the displacement field of the fluid δr\delta \mathbf{r}, because the relation includes ρ\rho:

δrBρ\delta \mathbf{r} \propto \frac{\mathbf{B}}{\rho} ()\quad\quad(\dagger)

In waves in MHD, ρ\rho also changes, and therefore its effect is important. In particular note the MHD linear wave equation for B\mathbf{B}:

The second equation means that if the fluid gets compressed in the direction perpendicular to the magnetic field, the magnetic field increases in magnitude. This has to be the case because:

  • If ρ\rho doesn't change, the compression in the perpendicular direction will imply an elongation in parallel direction, and thus δr\delta \mathbf{r} will elongate, and because ()(\dagger) implies that it is proportional to B\mathbf{B} for constant ρ\rho, then B\mathbf{B} will also increase.
  • Conversely, if δr\delta \mathbf{r} doesn't elongate, then ρ\rho must increase. But the LHS of ()(\dagger) hasn't changed, and therefore the RHS shouldn't either; as ρ\rho has increased, this implies that B\mathbf{B} must also increase.

Manufacturing

guillefix 7th May 2016 at 3:36am

Production of goods, by processing of raw materials.

When manufacturing is done in the context of an economy, it's called Industry.

Manufacturing innovation

guillefix 6th May 2016 at 11:49pm

Marine life

guillefix 7th May 2016 at 1:14am

Markov chain

guillefix 4th July 2016 at 6:52pm

Markov process with a discrete state space. Can have:

Definition of Markov Chain

Ergodic theorem for Markov chains

Has applications in theory of Stochastic processes, and in Machine learning. In particular through a Hidden Markov model

See also Finite state channel

Order of a Markov chain. See here

Markov subchains A subchain of a Markov chain is also a Markov chain

Regular Markov chain here here

See book Markov chains by Norris

Markov chain compression

guillefix 28th June 2016 at 4:33am

Markov input process

guillefix 1st July 2016 at 5:42pm

A kind of Information source, produced by a Markov process

Markov process

guillefix 28th June 2016 at 4:31am

Martingale

guillefix 1st July 2016 at 5:25am

Master equation

guillefix 27th April 2016 at 1:10am

For discrete space Stochastic processes

Discrete time master equation

For discrete time, probability to be in state nn at time t+Δtt+\Delta t is:

P(n,t+Δt)=nW(nn)P(n,t)P(n, t+\Delta t) = \sum_{n'} W(n'\rightarrow n)P(n', t)

where the W(nn)W(n'\rightarrow n) are the transition probabilities (which can be expressed as a transition matrix).
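A minimal numerical sketch of the discrete-time equation (my own toy 3-state birth-death chain; the particular transition probabilities are arbitrary):

```python
# Discrete-time master equation P(n, t+dt) = sum_n' W(n'->n) P(n', t),
# iterated as a matrix-vector product.
W = [
    [0.50, 0.50, 0.00],   # W[n'][n]: transition probabilities out of state n'
    [0.25, 0.50, 0.25],   # each row sums to 1 (probability is conserved)
    [0.00, 0.50, 0.50],
]
P = [1.0, 0.0, 0.0]       # start with all probability in state 0

for _ in range(200):
    P = [sum(W[np][n] * P[np] for np in range(3)) for n in range(3)]

print(P)  # converges to the stationary distribution (0.25, 0.5, 0.25); sum stays 1
```

Because each row of the transition matrix sums to 1, the total probability is conserved at every step, and for this irreducible aperiodic chain the distribution converges to the stationary one, which here satisfies detailed balance.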

Continuous time master equation

For continuous time, we can subtract P(n,t)P(n,t) from both sides of the discrete time equation, and divide by Δt0\Delta t \rightarrow 0. Then

dP(n,t)dt=nw(nn)P(n,t)[nw(nn)]P(n,t)\frac{d P(n, t)}{dt} = \sum_{n'} w(n' \rightarrow n) P(n', t) - [\sum_{n'} w(n \rightarrow n')]P(n,t)

=nnw(nn)P(n,t)[nnw(nn)]P(n,t)= \sum_{n' \neq n} w(n' \rightarrow n) P(n', t) - [\sum_{n' \neq n} w(n \rightarrow n')]P(n,t)

where w(nn)=limΔt0W(nn)Δtw(n' \rightarrow n) = \lim_{\Delta t \rightarrow 0} \frac{W(n'\rightarrow n)}{\Delta t}, and where for the bracketed part we used that probability is conserved (i.e. the particle has to go somewhere), nW(nn)=1\sum_{n'} W(n \rightarrow n') = 1, and in the second line we cancelled the n=nn' = n terms from both sums.

Solve using Fourier series, as if it were on a (discrete) lattice. For more general networks, Fourier methods may not be appropriate; you can then use eigenvector methods.

Matched asymptotic expansions

guillefix 7th June 2016 at 5:00pm

Perturbation method to get approximate solutions to singular perturbation problems of differential equations, often when the small parameter ϵ\epsilon is multiplying the highest derivative. Then the ϵ=0\epsilon=0 problem is of lower order, and will in general not be able to satisfy all the boundary conditions of the original problem.

If y is the solution to Dϵy=0D_\epsilon y = 0 then one possible behaviour in such cases is that:

  • over most of the range ϵdky/dxk\epsilon d^k y/dx^k is small, and yy approximately obeys D0y=0D_0 y = 0.
  • in certain regions, often near the ends of the range, ϵdky/dxk\epsilon d^ky/dx^k is not small, and yy adjusts itself to the boundary conditions (i.e. dky/dxkd^ky/dx^k large in some places).

In fluid dynamics these regions are known as boundary layers, in solid mechanics they are known as edge layers, in electrodynamics they are known as skin layers, etc.

For this reason the subject of matched asymptotic expansions is sometimes called boundary-layer theory.

Boundary layers can also appear in other circumstances, for instance when the perturbation converts a linear DE into a nonlinear one. See the example in question 2 here (in that case, the linear problem has a solution with a singularity at x=0x=0, but the nonlinearity makes f(0)ϵ1/2f(0) \sim \epsilon^{-1/2}). See solution in black oxford notebook.

Handout from lecture

Method of matched asymptotic expansions

1. Determine the scaling of the boundary layers (e.g. xϵx \propto \epsilon or ϵ1/2\epsilon^{1/2} or ...). Note, it may be appropriate to rescale the dependent variable too! See example in question 2 here.
2. Rescale the independent variable in the boundary layer (e.g. x=x^ϵx=\hat{x}\epsilon, or x^ϵ1/2\hat{x}\epsilon^{1/2} or ...)
3. Find the asymptotic expansions of the solutions in the boundary layers and outside the boundary layers (the "inner" and "outer" solutions)
4. Fix the arbitrary constants in these solutions by
(a) obeying the boundary conditions (often for inner solutions)
(b) matching – making the inner and outer solutions join up properly in the transition region between them.

Trick for finding scaling in boundary layer: in the boundary layer d2y/dx2d^2y/dx^2 is often significant (though not always!). We must increase α\alpha (where x=x^ϵαx=\hat{x}\epsilon^\alpha) until this term balances the largest of the others in the equation.
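The four steps can be checked on a standard textbook example (my own choice, not from these notes): ϵy+y=1\epsilon y'' + y' = 1 with y(0)=0y(0)=0, y(1)=2y(1)=2, which has a boundary layer of width ϵ\epsilon at x=0x=0:

```python
import math

# eps*y'' + y' = 1,  y(0) = 0,  y(1) = 2.
# Outer solution (satisfying y(1) = 2):       y_out  = x + 1
# Inner solution (x = eps*X, BC + matching):  y_in   = 1 - exp(-x/eps)
# Composite = inner + outer - overlap:        y_comp = x + 1 - exp(-x/eps)
# Exact solution for comparison:              y = x + C*(1 - exp(-x/eps)),
# with C = 1/(1 - exp(-1/eps)).
eps = 0.01
C = 1.0 / (1.0 - math.exp(-1.0 / eps))

def y_exact(x):
    return x + C * (1.0 - math.exp(-x / eps))

def y_composite(x):
    return x + 1.0 - math.exp(-x / eps)

err = max(abs(y_exact(x) - y_composite(x)) for x in [i / 100 for i in range(101)])
print(err)  # tiny: the composite expansion is uniformly accurate across the whole domain
```

Here the overlap between the inner and outer solutions is the constant 1, and the difference between the composite and exact solutions is exponentially small in 1/ϵ1/\epsilon.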

Matching of asymptotic expansions

Prandtl matching rule

Most elementary.

You simply require

limxBLyBL(xBL)=limxM0yM(xM)\lim_{x_{BL} \rightarrow \infty}y_{BL}(x_{BL}) = \lim_{x_M\rightarrow 0} y_M(x_M)

where BLBL and MM refer to boundary layer, and middle (outer) solutions and variables.

van Dyke matching rule

Van Dyke's matching 'rule' usually works (more powerful than Prandtl's) and is much more convenient than the Intermediate variable matching below. The rule is

(m term inner)(n term outer)=(n term outer)(m term inner)(m\text{ term inner})(n\text{ term outer}) = (n\text{ term outer})(m\text{ term inner})


I.e. in the outer expansion, in the outer variables, expand to nn terms; then switch to inner variables and re-expand to mm terms. The result is the same as first expanding the inner expansion in the inner variables to mm terms, then switching to outer variables and re-expanding to nn terms. Hmm, but these expansions are expressed in different variables. I guess, as a last implicit step, I should convert to the same variables to compare. What's the justification of this rule?

When using this matching rule you must treat log as O(1)O(1) because of the size of logarithmic terms.

Intermediate variable matching

Most advanced and powerful of the methods. More tedious to apply too.

Expansions for two "contiguous" regions should actually have an overlap or transition region where both expansions are valid.

For example, suppose there is a boundary layer for x=ϵx^x=\epsilon \hat{x} for x^=O(1)\hat{x} = O(1), but the expansion we find is actually valid for x^=o(ϵ1)\hat{x} = o(\epsilon^{-1}), i.e., the expansion breaks when x^\hat{x} becomes ord(ϵ1)ord(\epsilon^{-1}) or larger. Suppose also, that the middle, or outer region is defined for x=ord(1)x=ord(1), but the expansion is valid for x=ord(ϵα)x=ord(\epsilon^\alpha), for 0<α<10<\alpha <1. Then in any region with x=ord(ϵα)x=ord(\epsilon^\alpha), for 0<α<10<\alpha <1, both expansions are valid, and therefore should match due to the uniqueness of Asymptotic approximations.

Note some terms jump order: a term in the examples in the notes comes from the inner expansion of the first-outer term, but it also comes from the outer expansion of the second-inner term. "First-outer" refers to first order in the expansion in the outer region. The terms "inner and outer expansion" here refer to the expansions in terms of rescaled variables, but these terms are most often used for the van Dyke rule, where the "outer expansion" refers to the expansion of some term in terms of the outer variable, and similarly for the "inner expansion". The nomenclature he uses is a bit confusing though.

Composite expansion

A composite expansion is an expansion that is valid across the whole domain. It is built as yBL+YMoverlapsy_{BL} + Y_M - \text{overlaps}, where yBLy_{BL} are the solutions in the boundary layers, YMY_M are the outer solutions outside the boundary layers, and the overlaps\text{overlaps} are removed to avoid double counting. The overlaps\text{overlaps} term removes the contribution from the inner expansion when looking at the outer region(s), or the contribution from the outer expansion when looking at the inner region(s). In practice, this can be done by subtracting a term of the form (m term inner)(n term outer)(m\text{ term inner})(n\text{ term outer}) at the right order.

It is not unique, because it is not in standard Poincaré form.

Boundary and transition layers

Boundary layers

Think of the ϵ=0\epsilon = 0 problem, and that to have the possibility of a non-trivial boundary layer we need some solution in the inner region which decays as we move towards the outer. In the problem considered in the notes, for example, the non-constant solution in the right-hand "boundary layer" grew exponentially as we moved to the outer, so there could never be a boundary layer at x=1x = 1.

Transition or interior layers

Regions of fast change, not in the boundary, but in the interior of the domain. Finding the position of an interior layer can sometimes be hard.

Non-linear boundary layers

Boundary layer at infinity

Revise these


Example: van der Pol oscillator

Revise this example pages 36-41

Material

guillefix 1st July 2016 at 11:18pm

A particular kind of Bulk matter

See Materials science

Materials science

guillefix 21st July 2016 at 12:54am

Materials science, also commonly known as materials science and engineering, is the Science and Engineering of material properties, their design, and uses.

A material, I think, most often refers to a type of Bulk matter (identified by its composition in terms of phases and chemical composition). However it may sometimes refer to chemical substances per se, or other non-bulk, but relatively simple, arrangements of matter, as for example, in Nanotechnology. It may even be used for more complex arrangements of matter so that a bulk description is not totally appropriate, such as in "smart materials", where ideas from Complex systems may be necessary for their description.

Materials science needs to describe the specific properties of each material. Constitutive equations play a fundamental role in the theory of these properties.

The physics of materials is based on Condensed matter physics. If dealing with fluids, it of course uses Fluid mechanics too.

Best materials course ever (mostly metals)


See classification of materials in Condensed matter physics.

Some important materials: Polymers, Metals, Ceramics, Composite material.

See also Soft materials, Chemistry, Surface science

Some material properties:

See for a good resource on materials properties


Variational Methods for Microstructural Evolution

Some IUPAC definition recommendations:

Definitions of terms relating to the structure and processing of sols, gels, networks, and inorganic-organic hybrid materials (IUPAC Recommendations 2007)

Terminology of polymers and polymerization processes in dispersed systems (IUPAC Recommendations 2011)*

Math JS libraries

guillefix 27th June 2016 at 10:45pm

Mathematical biology

guillefix 8th May 2016 at 2:00pm

NIMBioS channel

http://quant.bio/

Has applications to Systems biology for instance.

Mathematical logic

guillefix 29th May 2016 at 12:31am

Things can make sense

Mathematical logic is an essential part (if not the essential part) of the foundations of mathematics

See Discrete mathematics, Theoretical computer science, Logic..

Video lectures:

Mathematics - Mathematical Logic

NPTEL Computer Sc - Discrete Mathematical Structures

Mathematical markup language

guillefix 20th June 2016 at 5:34pm

LaTeX to OpenMath

https://en.wikipedia.org/wiki/OpenMath A critique of OpenMath

While I applaud the occasional successes in these ventures, the results have been unimpressive even from the range of computations routinely performed by computer algebra systems. They certainly represent a small scope compared to the kinds of mathematics human researchers deal with informally on computers. (Consider all the advanced mathematics routinely typeset by use of the program TeX.) My view is that much of today's applicable mathematics, including that in ordinary texts and journals, is simply too informal to be handled by the logical and algebraic means typically proposed by the constructivists. Indeed, much of mathematical discourse goes beyond informality to be (unintentionally) ambiguous on its face. The ambiguity can generally be resolved by a sufficiently contextual interpretation, often requiring a reader to be skilled in the mathematical subdiscipline – not merely the notation – being represented.

Almost any ambitious computer algebra system that must eventually meet performance expectations seems to abandon proofs or (complete) formal rigor

One person’s syntax is another person’s semantics

AugMath should be able to represent informal mathematics, by basing its philosophy in the notation, just like LaTeX itself, rather than in the semantics. Semantics can be added later as a layer...

Get functions mathML from here: http://functions.wolfram.com/Bessel-TypeFunctions/BesselI/11/0001/

Mathematical methods

guillefix 28th June 2016 at 4:01pm

While all aspects of mathematics can potentially be applied, mathematical methods refers to those parts of mathematics designed to be applied.


This can also be called applied mathematics.

One important sub-area is industrial mathematics, mathematics applied to industry.

See https://www.siam.org/

Special functions and their properties: http://dlmf.nist.gov/

Mathematical modelling of neural networks

guillefix 18th February 2016 at 12:48am

Deep learning is an area of machine learning that studies learning algorithms with multiple levels of abstraction

Why Deep Learning models perform so well?

Seems to be a result of:

  • Very large datasets
  • Increasing computing power
  • Flexibility of the models. Lots of parameters when there are lots of layers. Furthermore, multiple layers avoid the curse of dimensionality

Mathematically difficult because of: nonlinearity, non-convexity (convex optimization or complex analysis techniques not available), many d.o.f.

Results:

  • Universality
  • Loss-function landscape

Neural network composed of neurons.

Data enters through the dendrites and is scaled. The axon computes (applies a nonlinear function) and propagates the output through the synapse.

A multilayer feedforward neural network.

$L+2$ layers, $L$ of them hidden.

The neural network is just a function from $\mathbb{R}^N$ to $\mathbb{R}^M$ ($M=1$ without loss of generality).

Training: given a dataset of inputs and outputs, we want the function to map these as well as possible.

Use a Loss function and a regulariser (a penalisation on the size of the parameters; one could also try to maximize sparsity: Occam's razor, a bias towards a simpler model. It also makes the loss surface more convex).

Then minimize the empirical risk. To minimize it, we use stochastic gradient descent.
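A minimal sketch of this training loop on a made-up linear toy problem (the data, learning rate and regulariser strength are illustrative assumptions, not from the notes):

```python
import numpy as np

# Sketch: minimise an L2-regularised empirical risk by stochastic
# gradient descent, one random sample per step.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # made-up inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                           # noiseless targets

w = np.zeros(3)
lam, lr = 1e-4, 0.05                     # regulariser strength, learning rate
for epoch in range(50):
    for i in rng.permutation(len(X)):    # shuffle, then one sample per step
        err = X[i] @ w - y[i]
        grad = err * X[i] + lam * w      # gradient of (1/2)err^2 + (lam/2)|w|^2
        w -= lr * grad

print(np.round(w, 2))                    # approaches w_true
```

With the tiny regulariser the fitted weights sit essentially on top of the true ones; increasing `lam` shrinks them towards zero, the "bias towards a simpler model" mentioned above.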

Assumptions on the function: continuity, differentiability, convexity.

Can a multilayer feedforward network $f$ approximate $g$ arbitrarily well, for a very general $g$?

Universality

We can't expect $f$, for the model considered (one layer), to approximate any $g$ whatsoever; there are some very pathological functions. We can assume $g$ is continuous, or just Lebesgue measurable (using a suitable metric for defining closeness in this case).

We can then show that $f$ can approximate $g$ arbitrarily well.

Many other models are also known to be universal.

Other minima.

The loss surface is the surface defined by the empirical risk, $E_M$.

The epigraph is non-convex.

Local minima of $E_M$ are known to abound.

Results:

  • For large-scale networks, most local minima are equivalent and yield similar performance on a test set.
  • The probability of finding a bad local minimum is non-zero for small networks and decreases quickly with network size. The higher the dimension, the lower the probability that all curvatures are positive, so there are more saddle points and fewer minima... Hmmmm
  • Struggling to find the global minimum on the training set is not useful in practice and may lead to overfitting.

Other results: only a few parameters matter.

The manifold hypothesis: meaningful data often concentrates on a low dimensional manifold, so large amounts of parameters don't matter.

→ See dissertation topic proposed by Ard Louis.

Energy propagating from node i through path j

Analogy between loss function of neural network and hamiltonian of spin glass.

(Multilayer: composition of functions.)

  • Minimizing the empirical risk is a good idea.
  • Neural networks may be over-parametrised.
  • But over-parametrisation gives these nice results about local minima

Mathematical physics

guillefix 11th June 2016 at 1:56pm

Mathematical software

guillefix 1st July 2016 at 2:08am

Mathematics

guillefix 29th May 2016 at 12:36am

Mathematics is the study of structures themselves. These are necessary in Science and in Art, as both require the invention of structures to either explain (and thus understand) the world, or for any other purpose (in the case of Art).

Mathematics, however, doesn't concern itself with the purposes or details of particular structures; rather, it concerns itself with the abstract properties common among many structures.

It is both the Art and Science of the structure of structures. It studies many structures in the world, and creates an abstract structure to understand them. In this sense it is a Science. It also creates new unobserved abstract structures, often, generalizing observed ones. In this sense it is an Art.

Common structures

Mathematics is sometimes called a formal science.

Useful resources and tools

Geogebra web app

Demos graphing calculator

FormulaSheet

Online equation editor

EquationMap

WolframAlpha

https://en.wikipedia.org/wiki/Category:Mathematics_portals

http://www.msri.org/web/msri/online-videos

Books

How to solve it - Polya

Street-fighting mathematics

People

http://math.ucr.edu/home/baez/

Steven Strogatz

http://euler.nmt.edu/~jstarret/

Other links:

https://jeremykun.com/

http://mathgl.sourceforge.net/doc_en/Main.html

http://www.theshapeofmath.com/princeton/dynsys

https://www0.maths.ox.ac.uk/courses

From http://bactra.org/thesis/single-spaced-thesis.pdf :

Formalizing intuitions: as (Quine 1961) insists, the goal [of formalizing some notion] is that the formal notion match the intuitive one in all the easy cases; resolve the hard ones in ways which don't make us boggle; and let us frame simple and fruitful generalizations.

Mathematics of networks

guillefix 29th March 2016 at 4:41pm

A network is a collection of nodes joined by edges. More generally, it is a collection of elements and their interactions. Most of the time, it has the same mathematical structure as a graph, GG, defined as an ordered pair (V,E)(V,E), where:

  • V={i}V=\{i\}, a set of nodes (a.k.a. vertices).
  • $E=\{(i,j) \in V \times V\}$, a set of edges (a.k.a. links, ties, etc.)

However, by interpreting an edge as a more general kind of relation, its mathematical structure can be a hypergraph. One can also have different types of vertices and edges defined for a network.

A simple network is a binary, undirected network that only has a single edge between a pair of nodes (i.e. no multi-edges), and doesn't have self-edges (a.k.a. self-loops).

Types of edges

Undirected: (i,j)(j,i)(i,j) \Leftrightarrow (j,i).
Directed: (i,j)(j,i)(i,j) \nLeftrightarrow (j,i)
Weighted: edges can have any real value associated.
Unweighted: can only have 0 or 1 (a.k.a. binary).

Representations

Representations: Edge lists, adjacency matrices(a.k.a. network matrix).

Adjacency matrix AA

$A_{ij} = 1$ if edge $(j,i)$ exists; $A_{ij} = 0$ if edge $(j,i)$ doesn't exist.

AT=AA^T=A if undirected.

AA describes same network if we permute columns and rows in the same way.

Weighted adjacency matrix (or weight matrix) WW assigns a weight to edges. Usually weight is a real number: w:ERw: E \rightarrow \mathbb{R}

"Topology" represented by $A$.

"Geometry" represented by $W$.
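A quick sketch of these representations, building $A$ and $W$ from a made-up edge list:

```python
import numpy as np

# Sketch: build the adjacency matrix A and weight matrix W of a small
# undirected weighted network from an edge list (illustrative example
# with 4 nodes labelled 0..3).
edges = [(0, 1, 2.5), (1, 2, 1.0), (2, 3, 0.7)]   # (i, j, weight)

n = 4
A = np.zeros((n, n), dtype=int)
W = np.zeros((n, n))
for i, j, w in edges:
    A[i, j] = A[j, i] = 1        # undirected: A is symmetric
    W[i, j] = W[j, i] = w

assert (A == A.T).all()          # A^T = A for an undirected network
```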

Cocitation and bibliographic coupling in directed networks

Two useful matrices, derived from the directed network adjacency matrix AA are the following (both can be used to define adjacency matrices that are symmetric and thus undirected! \leftarrow easier to analyze):

Cocitation matrix: C=AATC=AA^T. Nodes related if there is a node that points to both.

Bibliographic coupling matrix: B=ATAB=A^TA. Nodes related if there is a node to which both point.
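A small numerical sketch of both matrices, using the convention above that $A_{ij}=1$ iff there is an edge $j \to i$ (the example network is made up):

```python
import numpy as np

# Sketch: cocitation C = A A^T and bibliographic coupling B = A^T A.
# Both are symmetric, hence define undirected networks.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]])   # node 0 points to 1 and 2; node 1 points to 2

C = A @ A.T   # C[i, j] = number of nodes pointing to both i and j
B = A.T @ A   # B[i, j] = number of nodes to which both i and j point

assert (C == C.T).all() and (B == B.T).all()
print(C[1, 2], B[0, 1])   # 1 1: nodes 1,2 are both cited by node 0;
                          #      nodes 0,1 both point to node 2
```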

Common Types

Simple network, described above.

Acyclic networks have no cycles. A Directed Acyclic Graph (DAG) is a well known sub-type.

Hypergraphs are sets of elements with relations that include more than a pair of elements (i.e. they are members of a higher cartesian product).

Hypergraphs can equivalently be represented as Bipartite Networks, where there are two types of edges (a special case of a multipartite network, where there are many types). On the other hand, a multiplex network is one that has multiple types of edges.

Trees are connected (can reach all vertices following edges), undirected networks that contain no closed loops. A forest is a disconnected graph whose connected parts are trees.

A Planar network is a network that can be drawn on a plane without having any edges cross. It is a special case of a Spatial network.

Temporal networks are those for which the set of edges and/or nodes varies with a time parameter.

A Similarity network is one that expresses how similar the entities (represented as the nodes) are, the degree of similarity being the weight of the edge.

Other Mathematical aspects

The degree, kik_i, of a vertex, ii, is the number of edges connected to the vertex.

Paths

A path in a network is a sequence of nodes such that every pair of consecutive nodes in the sequence is connected by an edge in the network.

Definition of path, cycle, trail, circuit. The definition is extended to the directed case by only permitting traversal in the direction of the edge. Note that only directed graphs can have 2-cycles.

Components

A component is a subset of the network for which all pairs of vertices have at least one path, and which is maximal (i.e. no extra nodes can be added that preserve this property).

Independent paths, connectivity, and cut sets

The number of independent paths between two vertices (the connectivity) gives a measure of how strongly connected they are. Paths can be vertex-independent if they share no vertices (other than the starting and ending vertices), or edge-independent if they share no edges.

A vertex (edge) cut set is a set of vertices (edges) that, if removed, will disconnect a specified pair of vertices. A minimum cut set is the smallest such set for those vertices.

Graph laplacian

The graph laplacian is a useful quantity, derived from the adjacency matrix, which can be used to describe diffusion processes in a network, as well as in problems of random walks, resistor networks, graph partitioning and network connectivity.
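A small sketch of the combinatorial Laplacian $L = D - A$ (the example graph is made up): its rows sum to zero, and the multiplicity of the zero eigenvalue counts connected components, which is one of its uses for network connectivity.

```python
import numpy as np

# Sketch: L = D - A, with D the diagonal degree matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])           # a triangle plus an isolated node
L = np.diag(A.sum(axis=1)) - A

assert (L.sum(axis=1) == 0).all()      # rows sum to zero
eigvals = np.linalg.eigvalsh(L)
num_components = int(np.sum(np.isclose(eigvals, 0)))
print(num_components)                  # 2: the triangle and the isolated node
```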

Random walks

A random walk is a path across a network created by taking repeated random steps. They are usually allowed to traverse edges more than once, and visit vertices more than once. If not, it is a self-avoiding random walk. They are mathematically connected to resistor networks.

Maths frontend web libraries

guillefix 9th July 2016 at 5:24am

Matlab

guillefix 26th March 2016 at 4:07am

Matrix

guillefix 5th July 2016 at 11:42pm

Matrix calculus

guillefix 18th July 2016 at 11:44pm

Matroid

guillefix 1st July 2016 at 5:25am

Matroid theory

A matroid is a structure that captures and generalizes the notion of linear independence in vector spaces. There are many equivalent ways to define a matroid, the most significant being in terms of independent sets, bases, circuits, closed sets or flats, closure operators, and rank functions.

Matroids as a Theory of Independence by Federico Ardila

See books on matroid theory

Mean field approximation to average number of phenotypes discovered in Wright-Fisher model

guillefix 26th April 2016 at 3:12pm

(See Arrival of the frequent for context)

See also Wright-Fisher model

The Hamming distance (i.e. the number of differing letters, or mutations) dd is then distributed binomially:

$h(d) = \binom{L}{d} \mu^{d} (1-\mu)^{L-d}$
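A quick numerical sanity check of $h(d)$, with illustrative values $L=10$ and $\mu=0.01$ (not taken from the paper):

```python
from math import comb

# Sketch: the binomial mutation distribution h(d) for made-up values
# of the genotype length L_len and per-letter mutation rate mu.
L_len, mu = 10, 0.01
h = [comb(L_len, d) * mu**d * (1 - mu)**(L_len - d) for d in range(L_len + 1)]

assert abs(sum(h) - 1) < 1e-12      # a genuine probability distribution
# When L*mu << 1, h(1) is close to L*mu and higher orders are negligible:
print(h[1], L_len * mu)
```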

The expected number of individuals with genotype pp that arises at generation tt can be written as:

$m_p(t) = \sum_i^N \sum_{d=1}^L h(d) \Phi_p(g_i, s_i, d) = \sum_i^N \tilde{\Phi_p}(g_i, s_i)$ (Eq. 1)

where $\Phi_p(g_i, s_i, d)$ is the probability that a $d$-fold mutation of genotype $g_i$ (selected for reproduction according to fitness $1+s_i$) generates an individual with phenotype $p$. It takes into account the genotype-phenotype map. $g_i$ is the genotype of the $i$th member of the population, with a total of $N$ members. See the derivation of this below:

As the number is distributed binomially, the average number is mp=N(probability for single offspring to get phenotype p)m_p = N(\text{probability for single offspring to get phenotype p}). Then we define Φp~(gi,si)=(the probability for the single offspring to get to phenotype p \tilde{\Phi_p}(g_i, s_i) = (\text{the probability for the single offspring to get to phenotype p} given it inherits a mutated version of parent i)\text{given it inherits a mutated version of parent i}). Furthermore, (probability for single offspring to get phenotype p)(\text{probability for single offspring to get phenotype p}) = i=1N(probability of single offspring to get phenotype p through parent i) \sum_{i=1}^N (\text{probability of single offspring to get phenotype p through parent } i) = i=1NΦp~(gi,si)×(probability to inherit from parent i)\sum_{i=1}^N \tilde{\Phi_p}(g_i, s_i) \times (\text{probability to inherit from parent } i) = i=1NΦp~(gi,si)(1+si)j=1N(1+sj)\sum_{i=1}^N \tilde{\Phi_p}(g_i, s_i) \frac{(1+s_i)}{\sum_{j=1}^N (1+s_j)}. Finally,

mp=N(probability for single offspring to get phenotype p)m_p = N(\text{probability for single offspring to get phenotype p}) = i=1NΦp~(gi,si)N(1+si)j=1N(1+sj)\sum_{i=1}^N \tilde{\Phi_p}(g_i, s_i) \frac{N(1+s_i)}{\sum_{j=1}^N (1+s_j)} i=1NΦp(gi,si)\equiv \sum_{i=1}^N \Phi_p'(g_i, s_i)

By fine-graining the transitions from gig_i to a phenotype-pp genotype into transitions with particular mutation numbers dd, we can write Φp(gi,si)d=1LΦp(gi,si,d)\Phi_p'(g_i, s_i) \equiv \sum_{d=1}^L \Phi_p (g_i, s_i, d), recovering Eq. 1

[#[manual links]] (try to upgrade TW to make this work)

The actual number of individuals with genotype pp will follow a binomial distribution (as explained for a simple case in Wright-Fisher model), with probability mp(t)/Nm_p(t)/N, and number of trials NN. The probability of none of the offspring having phenotype pp is: (1mp(t)/N)Nemp(t)(1-m_p(t)/N)^N \approx e^{-m_p(t)}, the approximation holds for large NN, and may be seen as approximating the Binomial distribution by a Poisson distribution.
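A quick numerical check of the approximation $(1-m_p(t)/N)^N \approx e^{-m_p(t)}$, with made-up values of $m$ and $N$:

```python
import math

# Sketch: the probability that no offspring has phenotype p,
# binomial exact value vs the Poisson-style approximation.
m, N = 2.0, 1000                 # illustrative values
exact = (1 - m / N) ** N
approx = math.exp(-m)

print(exact, approx)             # agree to ~3 decimal places for large N
```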

If we assume that $L\mu \ll 1$, i.e. the average number of mutations per genotype is very small, then $h(d) \ll h(1)$ for all $d>1$, and $h(1) \approx L\mu$ ($h(0) \approx 1$, while $h(0) < 1$ of course).

With the above assumption that $L\mu \ll 1$, $\Phi_p'(g_i, s_i) = \sum_{d=1}^L h(d) \Phi_p(g_i, s_i, d) \approx \Phi_p(g_i, s_i, 0) + \Phi_p(g_i, s_i, 1) L\mu$. Also, $\Phi_p(g_i, s_i, 0) = 0$ if $p \neq q$. Next, if we assume $s_i = 0$ for all $i$ with $g_i$ mapping to phenotype $q$ (i.e. in space $\mathcal{N}_q$), and that it all starts within $\mathcal{N}_q$, we have

$m_p(t) = \sum_{i=1}^N \Phi_p'(g_i, s_i) \approx \sum_{i=1}^N \Phi_p(g_i, 0, 1) L\mu$ (Eq. 2)

We can also define the average of the {expected number of offspring with phenotype $p$ at one generation, which inherited from genotype $g_i$ at the previous generation via a single mutation}, i.e. the average of $\Phi_p(g_i, 0, 1)$ over all $g_i$ in $\mathcal{N}_q$. We will abuse notation and use the label $i$ in $g_i$ to label a genotype in $\mathcal{N}_q$, so that $i = 1, 2, ..., N_q$. The average is then:

$\Phi_{pq} = \frac{1}{N_q}\sum_{i=1}^{N_q} \Phi_p(g_i, 0, 1)$

Furthermore, we should note that $\Phi_p'(g_i, s_i) = \tilde{\Phi_p}(g_i, s_i) \frac{N(1+s_i)}{\sum_{j=1}^N (1+s_j)}$ (and a similar expression holds for the $d$-dependent quantities). When $s_i = 0$, we find $\Phi_p'(g_i, s_i) = \tilde{\Phi_p}(g_i, s_i)$, and also, for example, that $\Phi_p(g_i, 0, 1) = \tilde{\Phi_p}(g_i, 0, 1)$, where $\tilde{\Phi_p}(g_i, s_i, d)$ is the probability for the single offspring to get phenotype $p$ given that it inherits a mutated version of parent $i$ via a single-point mutation ($d=1$). Thus $\Phi_{pq}$ is the average of this probability.

We also define the robustness of phenotype $q$, $\rho$, as the average probability over all of $\mathcal{N}_q$ of a neutral mutation (i.e. one from $\mathcal{N}_q$ to $\mathcal{N}_q$). Under the approximate assumptions above, $\Phi_{qq} \approx \rho$. If we assume also that the population is large enough (more precisely, that we are in the Polymorphic limit (Wright-Fisher model)), we can use a mean field approximation: approximate $\Phi_p(g_i, 0, 1)$ by $\Phi_{pq}$. This approximation works best if the population is large enough that most of the neutral space $\mathcal{N}_q$ is populated (or in the words of the paper's author, the "1-mutant neighbourhood of the population is similar to that of the whole neutral space"). Using this in Eq. 2:

$m_p(t) \approx L\mu \sum_{i=1}^N \Phi_{pq} = N L\mu \Phi_{pq}$ (Eq. 3)

Mean field theory

guillefix 9th February 2016 at 4:53pm

Statistical field theory that ignores fluctuations. I.e. just describes the behaviour of the mean quantities of interest. Can get such behaviour by applying the method of steepest descents to the partition function.

Examples

Regular solution model

Bragg-Williams theory for binary alloys or Ising model (similar to above).

Curie-Weiss theory for the paramagnetic-ferromagnetic phase transition.

Measurable function

guillefix 7th July 2016 at 6:51pm

See Measure theory

A measurable function between two sets XX and YY, belonging to Measurable spaces (X,A)(X, A), and (Y,B)(Y, B), is {a Function f:XYf: X \rightarrow Y, s.t. for any EBE \in B, the Preimage of EE is in AA}. I.e. the preimage of any set in the Sigma-algebra of the co-domain is in the Sigma-algebra of the domain.


https://en.wikipedia.org/wiki/Measurable_function

Measurable space

guillefix 7th July 2016 at 6:22pm

See Measure theory

A space consisting of a set Ω\Omega, and a Sigma-algebra A\mathcal{A}.

Measure

guillefix 14th July 2016 at 3:34pm

A measure μ\mu on a set Ω\Omega, with Sigma-algebra A\mathcal{A}, is a Function μ:A[0,]\mu: \mathcal{A} \rightarrow [0, \infty], s.t.

  1. μ()=0\mu(\emptyset)=0
  2. Countable additivity: μ(i=1Ei)=i=1μ(Ei)\mu(\bigcup\limits_{i=1}^{\infty} E_i) = \sum\limits_{i=1}^{\infty} \mu(E_i) for any collection E1,E2,...AE_1, E_2, ... \in \mathcal{A} of pairwise disjoint sets.
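As a tiny sanity check of these axioms, the counting measure $\mu(E) = |E|$ on a finite set satisfies both (on a finite $\Omega$, countable additivity reduces to finite additivity):

```python
# Sketch: the counting measure on a finite Omega, checked against
# both axioms for a pairwise disjoint collection of sets.
def mu(E):
    return len(E)

E1, E2, E3 = {1}, {2, 3}, {4}        # pairwise disjoint subsets of Omega
assert mu(set()) == 0                 # axiom 1: measure of empty set is 0
assert mu(E1 | E2 | E3) == mu(E1) + mu(E2) + mu(E3)   # additivity
```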

Specifying a measure on a sigma-algebra is simplified by the

Types of measures


Definition

(PP 1.4) Measure theory: Examples of Measures

(PP 1.5) Measure theory: Basic Properties of Measures

Measure space

guillefix 14th July 2016 at 3:24pm

Measure theory

guillefix 15th July 2016 at 9:39pm

Measure-theoretical dynamical system

guillefix 7th July 2016 at 9:03pm

A Measure-theoretical dynamical system is comprised of:

This space can be considered, without restriction to be a Probability space. See Amigo's book.

See here (local).

A Dynamical system on a Measurable space has a natural or physical invariant measure, corresponding to the Probability measure that numerical simulations of the system would produce asymptotically.

Measures and metrics for networks

guillefix 16th February 2016 at 12:31am

If we know the structure of a network, then we can calculate a number of quantities or measures that capture features of the network topology (and geometry). Originally, a lot of these ideas were developed for social network analysis, but they are used elsewhere now too.

Centrality measures

Trying to answer: "Which are the most important or central vertices (or edges, or other substructures) in a network?"

Degree centrality

Simply the degree of a vertex can be used as a measure of its centrality.

Eigenvector centrality

The eigenvector centrality (first defined by Bonacich in 1987), is defined by:

$\mathbf{A}\mathbf{x}=\kappa_1 \mathbf{x}$

where $\mathbf{x}$ is the vector of centralities, and $\kappa_1$ is the largest eigenvalue of $\mathbf{A}$

A node can be important because it is connected to many nodes, or because it is connected to important nodes, or both.
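A minimal power-iteration sketch of this definition (the network is made up; for a connected, non-bipartite undirected network, power iteration converges to the leading eigenvector):

```python
import numpy as np

# Sketch: eigenvector centrality as the leading eigenvector of A,
# computed by power iteration.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

x = np.ones(len(A))
for _ in range(200):
    x = A @ x                     # repeatedly apply A...
    x /= np.linalg.norm(x)        # ...and renormalise

# node 0 has the highest centrality: highest degree, well-connected neighbours
print(np.round(x, 3))
```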

Katz centrality

Katz centrality solves the problem posed above by giving all vertices a "free" centrality:

$\mathbf{x}=\alpha\mathbf{A}\mathbf{x}+\beta \mathbf{1}$

PageRank

There is one potentially undesirable feature of Katz centrality. An important vertex pointing to many vertices makes all those vertices important. The centrality gained by virtue of receiving an edge from a prestigious vertex is diluted by being shared with so many others (think a web directory like Google or Yahoo! pointing to my page. My page is not that central because it's just one of millions). We can solve this by making the centrality derived from neighbours be divided by their out degree:

$x_i = \alpha \sum_j \frac{A_{ij}}{k^{\text{out}}_j}x_j +\beta$

which is the basis for PageRank
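A minimal fixed-point-iteration sketch of this equation (the network and the values $\alpha = 0.85$, $\beta = 0.15$ are illustrative assumptions; real PageRank implementations differ in details such as sink handling):

```python
import numpy as np

# Sketch: iterate x_i = alpha * sum_j (A_ij / k_j^out) x_j + beta,
# with A[i, j] = 1 for an edge j -> i.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # column j lists where j points
k_out = A.sum(axis=0)                     # out-degree of each node j
k_out[k_out == 0] = 1                     # guard against sinks (k_out = 0)

alpha, beta = 0.85, 0.15
x = np.ones(3)
for _ in range(200):                      # contraction since alpha < 1
    x = alpha * (A / k_out) @ x + beta

print(np.round(x / x.sum(), 3))           # normalised scores
```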

Hubs and authorities (Network theory)

One can distinguish two types of important nodes in directed networks. We describe them for the case of an information network, like WWW first:

  • authorities are nodes that contain useful information on a topic of interest
  • hubs are nodes that point us to the best authorities

This idea was implemented by Kleinberg into the hyperlink-induced topic search or HITS algorithm.

Closeness centrality

Closeness centrality of node $i$ is the mean geodesic distance to all other nodes in the network. A variant is exponentially weighted closeness centrality:

$C_C(i) = \sum_{j \in G_i} 2^{-2L_{ij}}$

where LijL_{ij} is the geodesic distance between node ii and jj; and GiG_i is the connected network component reachable from ii (except for ii).

Main disadvantage is its often very low dynamic range (range of values it takes)

There are also problems when there are disconnected components. One way is to define closeness centrality over only connected nodes, or to use harmonic mean (mean of reciprocals, ignoring self distance, as it's 0).

Betweenness centrality

Measures the extent to which a node (or edge, or other substructure) lies on paths between other vertices. These paths can be defined in many ways, but often they are taken to be geodesic paths.

Groups of vertices

Many networks naturally divide into groups. These are substructures that are prominent for some reason. Simple examples are cliques, plexes and cores. There are also generalizations of components called k-components.

Transitivity

Transitivity (a property of mathematical relations) in a network is usually applied to the relation "is connected by an edge". So a network is transitive if, whenever $u$ is connected to $v$ and $v$ is connected to $w$, then $u$ is connected to $w$. One can define the clustering coefficient, $C$, as a measure of "how often" transitivity holds in the network:

$C=\frac{(\text{number of triangles})\times 3}{\text{number of connected triples}}$
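A small sketch computing $C$ from the adjacency matrix of a made-up undirected simple network, using number of triangles $= \mathrm{Tr}(A^3)/6$ and number of connected triples $= \sum_i k_i(k_i-1)/2$:

```python
import numpy as np

# Sketch: clustering coefficient from the adjacency matrix.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])   # edges: 0-1, 0-2, 0-3, 1-2
k = A.sum(axis=1)              # degrees

triangles = np.trace(np.linalg.matrix_power(A, 3)) / 6
triples = np.sum(k * (k - 1)) / 2
C = 3 * triangles / triples
print(C)   # 0.6: one triangle, five connected triples
```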

Reciprocity

For a directed graph the smallest loop size is two, instead of three, and thus one often measures the frequency of length-2 loops. This is called reciprocity (see Transitivity for more comments). Pairs of reciprocated edges (that is, edges from $i$ to $j$ where there is also one from $j$ to $i$) are sometimes called co-links. The reciprocity is defined as the fraction of edges that are reciprocated, and this turns out to equal $\frac{1}{m} \text{Tr}\,\mathbf{A}^2$.
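A quick sketch of the $\frac{1}{m}\mathrm{Tr}\,\mathbf{A}^2$ formula on a made-up directed network:

```python
import numpy as np

# Sketch: reciprocity r = (1/m) Tr(A^2), with A[i, j] = 1 for
# an edge j -> i and m the number of directed edges.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 0, 0]])   # edges: 1->0, 0->1, 2->1; only 0<->1 reciprocated
m = A.sum()                  # m = 3 directed edges
r = np.trace(A @ A) / m

print(r)                     # 2/3: two of the three edges are reciprocated
```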

Signed edges and structural balance

Signed networks have signed edges, that is, their edges can have an associated weight $+1$ (like friendship) or $-1$ (like animosity).

Structural balance refers to the situation when the network contains only loops with even numbers of minus signs. This is so that the (naturally generalized versions of the) rules "the enemies of my enemies are my friends" and "the friends of my friends are my friends" hold. This is similar to the concept of "frustration" in spin networks.

Harary's theorem tells us that all balanced networks are clusterable, i.e. they can be divided into groups with only positive connections within groups and negative between them. Proof given in Newman's book and gives further intuition of concept of balance.

Similarity

How can we measure the "similarity" of two nodes (or edges, etc.)? Two main approaches. Two nodes may be:

  • structurally equivalent: if they share many of the same network neighbours.
  • regularly equivalent: have neighbours who are themselves similar.

Homophily or assortative mixing

Homophily or assortative mixing is a bias in favour of connections between network nodes with some similar characteristics.

Mechanical engineering

guillefix 23rd May 2016 at 11:21pm

Mechanics

guillefix 1st June 2016 at 7:14pm

In mechanics, we describe the motion of bodies, and the causes that affect it. This includes the special case where the "motion" is no motion, i.e. the bodies are stationary.

The description of the motion itself is called kinematics. This just sets up the relevant degrees of freedom, represented as variables in a relevant mathematical form.

The description of the causes, and of how these causes affect the motion, is called dynamics. These causes are often divided into forces and torques. This description relates the variables describing the motion above to the forces, which should depend on those variables themselves. This means that in dynamics we often have closed equations that we can solve in full generality.

Rotational dynamics


Another division of the areas of classical mechanics, used mostly in engineering, leaves the definition of kinematics the same, but what we referred to as dynamics above is called kinetics.

Dynamics then refers to mechanics applied to proper motion only (i.e. not including stationary case). In other words, dynamics is the kinematics and kinetics of proper motion.

Mechanics applied to the stationary case is referred to as statics. In other words, statics is the kinematics and kinetics of static equilibrium.


See the mechanical universe

Mechanistic target of rapamycin

guillefix 22nd April 2016 at 11:58pm

https://en.wikipedia.org/wiki/Mechanistic_target_of_rapamycin

A Kinase that regulates cell growth, cell proliferation, cell motility, cell survival, protein synthesis, autophagy, transcription.

Its signalling circuit has been studied, for instance as an example of GP map bias: Evolvability and robustness in a complex signalling circuit.

Medicine

guillefix 24th April 2016 at 12:46am

Meet operation

guillefix 14th July 2016 at 1:27am

A meet, \wedge is an operation defined on elements of a poset PP (not necessarily all of them) defined as:

The meet (or greatest lower bound) of $a, b \in P$ is an element $a \wedge b \in P$ such that:

(a) $a \wedge b$ is a lower bound of $a$ and $b$: thus $a \wedge b \preceq a$ and $a \wedge b \preceq b$;
(b) $a \wedge b$ is the greatest such lower bound: i.e., if there exists $c \in P$ such that $c \preceq a$ and $c \preceq b$, then $c \preceq a \wedge b$.

Note that, if it exists, a meet is necessarily unique.
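A concrete sketch: in the poset of positive integers ordered by divisibility, the meet of $a$ and $b$ is $\gcd(a, b)$ (an illustrative example, not from the notes):

```python
from math import gcd

# Sketch: meet = gcd in the divisibility poset, checked against
# both clauses of the definition for one pair (a, b).
a, b = 12, 18
m = gcd(a, b)

assert a % m == 0 and b % m == 0        # (a) m is a lower bound of a and b
for c in range(1, min(a, b) + 1):
    if a % c == 0 and b % c == 0:       # (b) any other lower bound c...
        assert m % c == 0               # ...satisfies c ⪯ m (c divides m)

print(m)   # 6
```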

See also Lattice (algebraic structure)

Membrane protein

guillefix 2nd July 2016 at 2:11pm

A membrane protein, or membrane-bound protein, is a protein bound to the Cell membrane

Memory allocation

guillefix 30th June 2016 at 1:46am

a.k.a. memory management

The Operating system allocates memory to processes, so that a process can only access that portion of memory.

This memory is divided into regions, principally the stack and the heap.

The addresses that the program uses to reference variables are actually Virtual memory addresses, which the operating system translates to physical memory.

https://en.wikipedia.org/wiki/Memory_management


What and where are the stack and heap?

http://stackoverflow.com/questions/18446171/how-do-compilers-assign-memory-addresses-to-variables

Memory heap

guillefix 30th June 2016 at 1:46am

Meta

guillefix 16th June 2016 at 8:29pm

This tiddler is about this TiddlyWiki itself.

BACKUPS

Upgrade TW

Creator, he lives in Oxford!

https://github.com/Jermolene/TiddlyWiki5/issues/2180 plugin or feature request: inner-tiddler-anchors

Manual links (for inter-tiddler links)! Check how to implement this, maybe I need to Upgrade TW?

Creating SubStories

Creating A Tabbed ToC

TiddlyWeb

ServerCommand hmm?

Nice collection of plugins

WOW TiddlyMap

TiddlyClip

Nice too! Codemirror editor Install this

Custom stylesheet.

In here I have an example of a workaround to get javascript working on there. Even though script tags are supposed to be removed, they aren't when inside an iframe. However, I then need to substitute document \rightarrow window.parent.document to access stuff in our document. Maybe I can define a Javascript Macro like the one below that takes javascript code as input, does the right substitutions, adds the iframe dressing, etc.!

Also should think of adding jquery, via a custom plugin..

Trick to embed stuff from other pages using iframes (and position the embedded content correctly). See example Apollonian gasket

Font Awesome: $:/plugins/TheDiveO/FontAwesome/fonts/FontAwesome TW on Font Awesome for TW

Example of custom Macro

$:/core/modules/macros/testMacro

This is an example of a (global) Javascript macro, as one can define local ones.

To use it, do in any tiddler, where this isn't the name of the tiddler, but the name defined inside it in exports.name = "testMacro";. It then runs the exports.run function.

TODO

  • Change hosting of linked google photos pictures to something else that is public
  • http://blog.jeffreykishner.com/2014/01/23/how-to-incorporate-font-awesome-icons-into-tiddlywiki-5.html
  • Create plugin/custom button to add child to tiddler (child defined via tags). I know this is a hierarchical idea, but still useful!
  • Also show backlinked stuff.
  • Make Table of contents Tiddler
  • Add automatic link to wiki page in tiddlers.
  • Scalability? There is a server side tiddlywiki apparently. At the moment in 15 days it increased by 1M. Therefore in a month, 2M. Therefore in a year 24M. I think the biggest thing I added was the pdf: The Hallmarks of Aging.pdf...which increased it by about 5M straight away. Big files, and pictures should be hosted somewhere else and linked.. There are TW with 1000s of tiddlers apparently that go well.
  • References. How to add LaTeX-like referencing..
  • See CosmosBrain created in The Brain
  • Define root path for offline file:// links!
  • Find way of saving current color scheme, and try other ones. Like black on white (better for reading).
  • Subdivide tiddlers more.
  • Learn more about tiddlywiki and how to hack/modify it.
  • TW is kinda a OS on the web, can we give it more features like better window (open tiddler) management, etc. Tiddlers are like files. Shadow tiddlers, macros, etc. are programs.
  • Add AI stuff like http://home.ideapad.io and concept map stuff! :) See also http://ideaflowplan.tk/

LaTeX test: x2=3x^2=3

Font Awesome Test: Waving flag:

Metal

guillefix 7th May 2016 at 5:54pm

A material made of atoms bonded by metallic bonds (see Chemical bonds)

A metallic element is one that forms a metal when in its pure solid state.

Pure metallic crystalline solids are almost always found in either BCC or FCC, and sometimes HCP, crystal arrangements. Furthermore, BCC and FCC appear only in metals, at least when looking at pure-element crystals (see Periodic table (crystal structure))

Metaphysics

guillefix 8th July 2016 at 2:57am

Study of the nature of Nature.

See Philosophy for the basis of my metaphysics.

Basically my ontology is based on two levels, depending on certainty (not sure if these are the best names):

  1. Observer perspective. The most certain parts of Knowledge are those pertaining to what you experience.
  2. God-like perspective. The next level in certainty is about things we infer from what we experience. This is where most of Knowledge resides. This deduction uses tools like Epistemology, Logic, Science, etc. to infer our Knowledge of Reality, or Physical World.

The physical world is based purely on primary substances (concrete things of the physical world). Other things are just emergent properties of it, including us and our thoughts. Those thoughts are where abstract Concepts and Knowledge reside.

I think a good way to approach metaphysics is via Systems theory, and Science

Introduction to Metaphysics

Topics in metaphysics

Aristotle's metaphysics

Continental rationalists and metaphysics

Decartes, Leibniz, Spinoza

  • General metaphysics, or Ontology, the study of being or existence.
  • Special metaphysics
    • Cosmology
    • Rational psychology
    • Natural theology

https://en.wikipedia.org/wiki/Metaphysics

Meteor (JS)

guillefix 30th June 2016 at 1:06am

Method of multiple scales

guillefix 2nd May 2016 at 2:27pm

Assume functions in the asymptotic expansion depend on tt through variables, corresponding to different time scales: T0=tT_0 = t, T1=ϵtT_1=\epsilon t, T2=ϵ2tT_2 = \epsilon^2 t, etc.
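A minimal numeric sketch (the weakly damped linear oscillator is an assumed illustrative example, not taken from the notes): two-timing with T0=tT_0=t, T1=ϵtT_1=\epsilon t applied to x¨+ϵx˙+x=0\ddot x + \epsilon \dot x + x = 0 gives, at leading order, xeϵt/2costx \approx e^{-\epsilon t/2}\cos t, which can be compared against direct numerical integration.

```python
import math

# Weakly damped oscillator x'' + eps x' + x = 0, x(0)=1, x'(0)=0 (assumed example).
# Two-timing: the slow time T1 = eps*t controls the amplitude e^(-eps t/2),
# the fast time T0 = t carries the oscillation cos(t).
eps = 0.1

def rk4(t_end, h=1e-3):
    """Direct numerical integration (classic Runge-Kutta) for comparison."""
    x, v = 1.0, 0.0
    f = lambda x, v: (v, -eps * v - x)
    for _ in range(round(t_end / h)):
        k1 = f(x, v)
        k2 = f(x + h/2 * k1[0], v + h/2 * k1[1])
        k3 = f(x + h/2 * k2[0], v + h/2 * k2[1])
        k4 = f(x + h * k3[0], v + h * k3[1])
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x

t = 10.0
x_num = rk4(t)
x_two_timing = math.exp(-eps * t / 2) * math.cos(t)
print(abs(x_num - x_two_timing))  # small; the leading-order error grows like eps^2 * t
```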

Method of stationary phase

guillefix 28th April 2016 at 2:32am

I(x)=abf(t)eixψ(t)dtI(x) = \int_a^b f(t) e^{ix\psi(t)} dt   as xx\rightarrow \infty

with ψ(t)\psi(t) real.

Uses Riemann-Lebesgue lemma:

Riemann-Lebesgue lemma

... Useful also when doing integration by parts for Asymptotic approximation of integrals

See statement on notes

Method of stationary phase

Split integral into region close to stationary phase point(s) and the rest. Then it's similar to Laplace method

See example in notes..

Important notes

  • The error terms are only algebraically small, not exponentially small as in Laplace method.
  • Higher-order corrections are very hard to get, since they may come from the whole range of integration. This is in contrast to Laplace method, where the full asymptotic expansion depends only on the local region because the errors are exponentially small.
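A small numeric check of the leading-order formula (the choice f(t)=1f(t)=1, ψ(t)=t2\psi(t)=t^2 on [1,1][-1,1] is an assumed example, not from the notes): the stationary point at t=0t=0 gives I(x)π/xeiπ/4I(x) \sim \sqrt{\pi/x}\, e^{i\pi/4}, with only algebraically small endpoint errors.

```python
import cmath, math

# Assumed example: I(x) = integral of e^{i x t^2} over [-1, 1], f = 1, psi = t^2.
# Stationary point t = 0, psi'' = 2, so the leading order is
#   I(x) ~ sqrt(2*pi/(x*psi'')) * e^{i pi/4} = sqrt(pi/x) * e^{i pi/4}.
def I_num(x, n=200_000):
    h = 2.0 / n  # midpoint rule; fine enough to resolve the fastest oscillation (~2x)
    return h * sum(cmath.exp(1j * x * (-1 + (k + 0.5) * h) ** 2) for k in range(n))

x = 200.0
leading = math.sqrt(math.pi / x) * cmath.exp(1j * math.pi / 4)
err = abs(I_num(x) - leading)
print(err)  # only algebraically small: the endpoint corrections are O(1/x)
```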

Method of steepest descents

guillefix 28th April 2016 at 3:18pm

I(x)=abf(t)exϕ(t)dtI(x) = \int_a^b f(t) e^{x\phi(t)} dt   as xx\rightarrow \infty

Here ϕ(t)\phi(t) and f(t)f(t) are generally complex, and the integral is along a complex contour in general.

See justification in notes..

Also: Handouts from lecture

Steepest descent contour refers to the contour of steepest descent of uu, the real part of ϕ(t)=u(t)+iv(t)\phi(t) = u(t) +iv(t). That is, the contour parallel to its gradient, u(t)\nabla u(t) . This is because, for ϕ(t)\phi(t) an analytic function, u(t)\nabla u(t) is perpendicular to v(t)\nabla v(t) , so that the steepest descent contour is also a contour of constant imaginary part of ϕ(t)\phi(t). This latter condition (together with others, depending on the problem) is often used to find the contour.

The other conditions may be:

  • If the contour goes from one valley (valley in the uu landscape, which can only have saddle points, due to Cauchy–Riemann equations) to another valley, then the steepest descent contour must pass through a saddle point (in the uu landscape). This is because in one valley du/ds<0du/ds<0 (ss being distance along the path), and in the other du/ds>0du/ds>0. Therefore, at some point, du/ds=0du/ds=0. However, in a steepest descent contour dv/ds=0dv/ds=0 everywhere. Therefore, at that point du/ds=0du/ds=0 and dv/ds=0dv/ds=0, which implies that u(t)=0\nabla u(t) = 0, so it is a saddle point. In this situation, the Laplace integral will get contributions from the saddle points (where uu is largest).
  • If the contour starts at a certain point, not infinity, then that point has to be kept, and the Laplace integral may get a contribution from it.

Method of steepest descents

1. Deform the contour to be the steepest descent contour through the relevant saddle node(s).
2. Evaluate the local contribution from the saddle, exactly as in Laplace method.
3. Evaluate the local contribution from the end points, exactly as in Laplace method.

Remember that when deforming the contour we must include the contribution from any poles that we cross.

Example: Steepest descents on the gamma function

Example: Steepest descents on the Airy function

Both of these are moveable saddle problems, so we first need to rescale variables, so that the saddle is fixed.
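For the gamma function, the steepest-descents (saddle-point) calculation reproduces Stirling's formula; a minimal numeric check (the sample points are arbitrary):

```python
import math

# Steepest descents on Gamma(x+1) = integral of e^{x ln t - t} dt over (0, inf):
# rescaling t = x*s fixes the moveable saddle at s = 1 and yields Stirling's formula,
#   Gamma(x+1) ~ sqrt(2*pi*x) * (x/e)**x,  with relative error ~ 1/(12x).
def stirling(x):
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x

for x in (5, 10, 20):
    print(x, round(stirling(x) / math.gamma(x + 1), 4))  # ratios approach 1
```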

Revise Watson lemma, and examples

Metric

guillefix 14th July 2016 at 12:41am

A metric on a Set XX is a map d:X×XRd: X \times X \rightarrow \mathbb{R} (i.e. from the Cartesian square of XX to the Real numbers), that satisfies the conditions:

  • symmetry: d(x,y)=d(y,x)d(x,y) = d(y,x)
  • positivity: d(x,y)0d(x,y) \geq 0, and =0=0 if, and only if, x=yx=y.
  • Triangle inequality: d(x,y)d(x,z)+d(z,y)d(x,y) \leq d(x,z) + d(z,y).
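A minimal brute-force sketch (my own illustration) checking the three axioms on a finite set:

```python
from itertools import product

def is_metric(X, d):
    """Brute-force check of the metric axioms on a finite set X."""
    X = list(X)
    return (
        all(d(x, y) == d(y, x) for x, y in product(X, repeat=2))  # symmetry
        and all(d(x, y) >= 0 and ((d(x, y) == 0) == (x == y))
                for x, y in product(X, repeat=2))                 # positivity
        and all(d(x, y) <= d(x, z) + d(z, y)
                for x, y, z in product(X, repeat=3))              # triangle inequality
    )

print(is_metric(range(5), lambda x, y: abs(x - y)))    # True: usual metric on integers
print(is_metric(range(5), lambda x, y: (x - y) ** 2))  # False: squared distance breaks the triangle inequality
```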

Metric space

guillefix 14th July 2016 at 12:42am

A Set with a Metric function.

It can be used to define notions of Convergence, and Open sets.

Because a metric space has a natural notion of open set, it also has a natural topology

MHD_waves1.png

guillefix 1st February 2016 at 12:59am

Microeconomics

guillefix 6th February 2016 at 1:16am

Microhydrodynamics

guillefix 2nd July 2016 at 4:49am

Microsystems and nanosystems engineering

guillefix 29th June 2016 at 6:39pm

Microtubule

guillefix 11th May 2016 at 12:43am

Microtubules (micro- + tube + -ule) are a component of the cytoskeleton, found throughout the cytoplasm.

These tubular polymers of tubulin can grow as long as 50 micrometres and are highly dynamic. The outer diameter of a microtubule is about 24 nm while the inner diameter is about 12 nm. They are found in eukaryotic cells, as well as some bacteria, and are formed by the polymerization of a dimer of two globular proteins, alpha and beta tubulin.

(https://en.wikipedia.org/wiki/Microtubule)

Some types of Molecular motors walk along microtubules to transport molecular cargo inside a cell.

They are the primary component of the Spindle which separates chromosomes during cellular division.

Microtubule turnover: the process by which microtubules decay, and are replaced. "Turnover" can also refer to the rate of this process. See turn over (definition): To be replaced by something else of the same kind. See here.

Also microtubules page.

David Odde - Microtubule Self-Assembly

Milky Way

guillefix 5th July 2016 at 3:32am

The Milky Way is the Galaxy that contains our Solar System

Mind

guillefix 8th July 2016 at 2:24am

A complex system that has features associated with sentient, intelligent and conscious beings. It is capable of thought and emotion (see Philosophy, Cognitive science, Philosophy of mind).

It is currently only physically realized in the Brain, but Computer-based versions are very likely possible, and are part of the transhumanist vision.

Miscellaneous notes from first Nando's first deep learning lecture

guillefix 26th April 2016 at 7:28pm

Inspiration from neuroscience -> neural networks.

Convolutional network. Matthew Zeiler & Rob Fergus

Supervised vs unsupervised.

A good principle for learning is for the machine to try to reconstruct the things it wants to learn using its neural net. If what it reconstructs doesn't agree with what it then sees, it should learn. This sounds like learning by imitation.

Regularity helps..

Multimodal, learning combining different kinds of data

Sequence learning and recurrent nets: have memory, can predict sequences (in time say). Can parse words, and they show that grammar can be learned.

Being able to fill gaps in the information you receive (like our brain does, or like machines do with generative models, which also learn) is useful for decision making, as you can know what to expect, even with incomplete info.

Siamese neuronal network Q

Reinforcement learning

Imitation learning

Back-propagation.

Mitosis

guillefix 22nd April 2016 at 9:46pm

MMathPhys

guillefix 14th July 2016 at 5:07pm

Outcome: Distinction (First class honours).

Exam marks

Year     Code    Assessment                                 Type          Mark  Grade
2015/16  A12169  Nonlinear Systems (Combined)               Overall Mark  77    -
2015/16  A12206  Perturbation Methods                       Written       77    -
2015/16  A13117  Networks                                   Submission    73    -
2015/16  A15088  Quantum Field Theory                       Written       100   -
2015/16  A15089  Kinetic Theory                             Written       75    -
2015/16  A15091  Scientific Computing I                     Submission    70    -
2015/16  A15275  Soft Matter Physics                        Practical     -     Pass
2015/16  A15280  Scientific Computing II                    Submission    100   -
2015/16  A15282  Nonequilibrium Statistical Physics         Written       80    -
2015/16  A15416  Topics in Soft and Active Matter Physics   Practical     -     Pass
2015/16  A15417  Complex Systems                            Submission    76    -
2015/16  A15430  Oral Presentation                          Oral          -     Pass


BACKUP LECTURE NOTES

Website

old unofficial website

Courses to take and course requirements

Hilary Term

Combined timetable No soft matter on Tuesday. Instead Friday at 12am

Lecture courses

Examination conventions

Handbook

Gingkoapp tree

Mathematical institute classes

  • Nonlinear systems: Class Tutor: Prof Irene Moroz Teaching Assistant: Mr Graham Benham. Thursdays wks 4,6,8 C4 2-3:30pm. Hand in written work by: Mondays 5pm in MI basement
  • Networks: Class Tutor: Mr Sewook Oh Teaching Assistant: Mr Marco Pangallo. Wednesdays, weeks 2–7, 14:00–15:00. Hand in written work by: Tuesday, 5pm. Now: attend on Fridays 10am

Trinity term

Timetable


The Oral Presentations will take place in week 5 of Trinity term not week 4.

The Trinity term mini-projects will be released at 12noon on Monday of week 6 and are to be submitted by 12noon on Monday of week 9 (rather than weeks 5 and 8 respectively).

The Trinity term take-home-exams are to be released at 12noon on Monday of week 9 of Trinity term and are to be submitted by 12noon on Wednesday of week 9 (rather than in week 5).


Exams timetable

  • 1st of June 14:30 - Nonequilibrium statistical physics (1.5h)
  • 6th of June 14:30 - Nonlinear systems (1.5h)
  • 9th of June 09:30 - Perturbation methods (1.5h)

MMathPhys miniprojects

guillefix 1st July 2016 at 5:03pm

Networks

On Spatial networks

In particular I looked at networks formed by the Physarum polycephalum, when connecting food sources. I used a mathematical model of these networks and looked at their features. They turn out to perform rather well under metrics of efficiency and robustness. They also display typical features of Spatial networks, in particular Planar networks.

See Physarum machines and physarum solver, and project in Overleaf. See code in Dropbox.



Complex systems

On Percolation

In particular, on the Relations between the stability of Boolean networks and percolation



Nonlinear systems

On the Duffing oscillator

The effects of small damping, nonlinearity and forcing on a harmonic oscillator:

x¨+βx˙+x+δx3=Γcosωt\ddot{x} + \beta \dot{x} + x + \delta x^3 = \Gamma \cos{\omega t}

  • The simple harmonic oscillator (forced and damped, in general)
  • Duffing oscillator
    • Free (unforced) Duffing oscillator
      • Free undamped Duffing oscillator
      • Free damped Duffing oscillator
    • Forced damped Duffing oscillator.

There are potentially 8 qualitatively different forms of the equation, depending on which combination of the 3 parameters considered is non-zero.
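A trivial enumeration sketch of these forms (the parameter names are just labels for β\beta, δ\delta and Γ\Gamma):

```python
from itertools import product

# Each of beta (damping), delta (nonlinearity), Gamma (forcing) may be zero or
# non-zero, giving 2**3 = 8 qualitatively different forms of the equation.
params = ["beta", "delta", "Gamma"]
forms = [
    {p for p, on in zip(params, flags) if on}
    for flags in product([False, True], repeat=len(params))
]
print(len(forms))  # 8; the empty set is the simple (undamped, unforced) harmonic oscillator
for f in forms:
    print(sorted(f) or ["(simple harmonic oscillator)"])
```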

The Duffing Equation: Nonlinear Oscillators and their Behaviour

References on the Duffing oscillator

MMathPhys oral presentation

guillefix 21st July 2016 at 3:08pm

The presentation should not be longer than 20-25 minutes and there will be a 5-10 minute discussion session after the presentation. You are free to choose whether you want to give a blackboard presentation or use slides. Timetable My presentation is on Friday 27th May in L5, at 12:30. Practice it

See slides at slides for a nice and ordered presentation of the ideas.

Evolution

Modern evolutionary synthesis

Contingency, convergence and hyper-astronomical numbers in biological evolution Notes on Ard Louis' paper on contingency, convergence and hyper-astronomical numbers in biological evolution

A.A. Louis: Publications

Hunting Darwin's Snark: which maps shall we use?

Bias in GP maps

The effect found in many Genotype-phenotype maps by which some phenotypes have many more corresponding genotypes than other phenotypes. This effect is important in Evolution

See MMathPhys oral presentation

Examples of GP map bias

Simplicity bias

Effects of bias in GP maps

Arrival of the frequent

The Arrival of the Frequent: How Bias in Genotype-Phenotype Maps Can Steer Populations to Local Optima pdf

The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA pdf. Notes on the RNA GP map bias paper

Common features of GP maps

Examples of GP map bias

Origin of bias in GP maps


Genotype-phenotype map (GP map)


See Descriptional complexity

Evolutionary Robotics and computing use GPMs. See Evolutionary computing and Optimization .. See References from Complex Behavior in Evolutionary Robotics book


Survival of the flattest

An effect whereby large neutral spaces are also effectively favoured, but in equilibrium, not out of equilibrium as in the Arrival of the frequent


More

Convergent evolution as natural experiment: the tape of life reconsidered

Applications to Deep learning and ANNs? Chico's application to networks. His slides

Relation b/w bias for simplicity in GP maps, and regularization in Machine learning.

Genotype is the weights of the NN, phenotype is the function the NN approximates. NNs are then expected to find "simple" functions much more easily, I suppose. In other words, they are able to recognize patterns much more easily if there is actually a pattern (in the sense of a simple pattern..)



Sloppy systems

Model theory

guillefix 29th June 2016 at 7:21pm

See also finite model theory

Models of network formation

guillefix 17th March 2016 at 11:04pm

Models that describe the processes by which a network forms or is generated are often called generative network models. One of the most famous ones is the "preferential attachment" model.

Preferential attachment

related to "rich get richer" idea in economics (Herbert Simon).

Preferential attachment (also called cumulative advantage in older literature) refers to the idea that new nodes in a network preferentially attach themselves to some nodes in the existing network rather than others.

The attachment is described in terms of a probability distribution over existing nodes for the creation of an edge. The preference is described by an attachment kernel, aia_i, which is the probabilistic weight of node ii. The probability that a new node connects to existing node ii is thus:

qi=aijajq_i=\frac{a_i}{\sum_j a_j}

Different preference types can be considered, the main categories being:

  • Structural properties. The most common one is degree (higher prefered). These can be expressed in a vector over nodes: α\vec{\alpha}.
  • Other properties. Most common one is fitness. Fitness refers to some inherent quantity assigned to each node at its creation, and that is independent of network structure. Another example, could be external factors. We can also write these in a vector: η\vec{\eta}.

The attachment kernel is then generally a function of these: ai=ai(αi,ηi)a_i=a_i(\alpha_i, \eta_i).

Note: We need a seed network (initial condition), to get any network out of this model. The network will eventually be independent of the seed, but this can take a very large number of nodes NN, sometimes in the order of billions.

de Solla Price's model (dSP model)

Proposed in the study of citation networks.

The main assumption of the model is that the probability that each new edge, created when we add a new node, attaches to a given existing node depends only on the degree of that node (on the in-degree to be precise, i.e. the number of citations it has). In particular it assumes an affine preferential attachment:

qi=ki+aj(kj+a)=ki+aN(a+c)q_i=\frac{k_i+a}{\sum_j(k_j+a)}=\frac{k_i+a}{N(a+c)}

One can write a master equation for the degree distribution, which has a steady-state (i.e. NN \rightarrow \infty) behavior given by power-law decay with exponent:

α=2+ac\alpha=2+\frac{a}{c}.

Thus, many scholars believe that this simple model may describe the fundamental mechanism by which power laws are obtained in many real-world networks.

Barabási–Albert (BA) model

Almost a special case of de Solla Price model, but with new assumptions:

  • undirected edges
  • exactly cc new edges per new node.
  • attachment kernel (a.k.) is now exactly proportional to degree (undirected). Note that now degree is always greater than or equal to cc. In terms of the parameters of the dSP model, if we add an ancillary direction to the edges of the BA model, from new to old nodes, then the a.k. is proportional to ki=kiin+ck_i=k_i^{\text{in}}+c, where kik_i is now the total degree, and cc is the out-degree. We see that cc plays the role of aa. The exponent for the power-law tail is thus α=3\alpha=3.
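A minimal simulation sketch of the BA growth rule (my own illustration; the repeated-node-list sampling trick and the small complete seed graph are implementation assumptions, and the parameters are arbitrary):

```python
import random

def barabasi_albert(n, c, rng=None):
    """Grow an undirected BA network: each new node attaches c edges to
    existing nodes with probability proportional to their degree."""
    rng = rng or random.Random(0)
    seed = c + 1  # small complete seed graph, so every node starts with degree >= c
    # each node appears in `targets` once per unit of degree, so sampling
    # uniformly from this list IS degree-proportional (preferential) attachment
    targets = []
    degree = {i: seed - 1 for i in range(seed)}
    for i in range(seed):
        for j in range(i):
            targets += [i, j]
    for new in range(seed, n):
        chosen = set()
        while len(chosen) < c:  # c distinct attachment targets
            chosen.add(rng.choice(targets))
        for t in chosen:
            targets += [new, t]
            degree[t] += 1
        degree[new] = c
    return degree

deg = barabasi_albert(3000, c=3)
n_edges = sum(deg.values()) // 2  # deterministic: seed edges + c per added node
print(min(deg.values()), n_edges)
```

Plotting the degree counts on log-log axes should show the heavy (power-law) tail; here only the deterministic structural properties are checked.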

Other properties of preferential attachment models

Degree distribution as a function of time of creation

Nodes that were added earlier to the network have had more time for new nodes to attach to them, and thus on average have higher in-degree.

This can be shown by starting with a new quantity: the fraction of nodes (in average over the ensemble, so effectively the probability) that were created at time tt and have in-degree kk when the network has nn vertices, pk(n,t)p_k(n,t). The "time" tt increments by 1 every time we add a node, and thus effectively labels nodes in the order in which they were added.

One can then write a master equation, noting that no nodes have t>nt>n, except the new node, which has t=n+1t=n+1 and in-degree 00. However, the fraction of nodes created at any particular time goes to 00 as nn \rightarrow \infty, and so we change variables to a probability density in tt by dividing pp by nn. We also rescale time by dividing by nn, for convenience and to properly convert the master equation into a differential equation.

Sizes of in-components

Can also derive a master equation. See homework problem 4

Extensions of preferential attachment models

  • Edges (like hyperlinks) may also disappear. They may also appear at times after the nodes are added.
  • Nodes may also disappear (like websites).
  • Preferential attachment could be non-linear on degree, or it could depend on other network property of the node.

Vertex copying models

Kleinberg et al. have proposed a model where new nodes imitate the out-edge configuration of an existing node. This is done by linking to some of that node's neighbours, while the rest of the connections are to randomly chosen nodes in the network. In particular, we first choose a node uniformly at random, and then go through its edges, copying each one with probability γ\gamma, or ignoring it and choosing a node at random with probability 1γ1-\gamma. Remarkably, the expression for the fraction of nodes with degree kk when the network size is nn has the same form as in Price's model, but with an aa given by an expression depending on γ\gamma, and thus it also follows a power law. The networks still differ in other structural aspects, in particular regarding correlations.

This model reminds us that just knowing the degree distribution doesn't tell us the mechanism that gave rise to it. We need more information to make this inference.
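A hypothetical minimal sketch of this vertex-copying growth (the parameters γ=0.7\gamma=0.7, 3 out-edges per node, and the directed-graph representation are all assumptions for illustration):

```python
import random

def vertex_copy(n, gamma, seed_out=3, rng=None):
    """Directed vertex-copying growth: each new node picks a random prototype
    node and, for each of the prototype's out-edges, copies the target with
    probability gamma or rewires to a uniformly random node otherwise."""
    rng = rng or random.Random(1)
    seed = seed_out + 1  # tiny seed so every node can have seed_out out-edges
    out_edges = {i: [j for j in range(seed) if j != i][:seed_out] for i in range(seed)}
    for new in range(seed, n):
        proto = rng.randrange(new)  # uniformly random existing node
        out_edges[new] = [
            t if rng.random() < gamma else rng.randrange(new)
            for t in out_edges[proto]
        ]
    return out_edges

g = vertex_copy(2000, gamma=0.7)
in_deg = {}
for src, ts in g.items():
    for t in ts:
        in_deg[t] = in_deg.get(t, 0) + 1
print(max(in_deg.values()))  # heavy-tailed: a few nodes accumulate many in-links
```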

In some biological networks (metabolic and protein-protein networks), vertex copying seems to be the most probable explanation for the observed power-law distributions. The mechanism by which this happens is gene duplication (by which, when copying DNA, a gene is duplicated by mistake) and point mutations (a mutation of a single base pair). This, through evolution, creates different proteins, which (due to their common origin) are still similar and have a lot of protein-protein interactions in common.

Observations of power law in protein and metabolic networks:

Lethality and centrality in protein networks

The large-scale organization of metabolic networks

Proposed models

A Model of Large-Scale Proteome Evolution: http://www.santafe.edu/media/workingpapers/01-08-041.pdf

Modeling of protein interaction networks

Network optimization

An alternative way networks may "form". Often these are rationally created networks, designed to optimize toward some goal.

Travel time and cost trade-offs

A good example is airline networks, where a compromise is sought between lowering cost (favouring central hubs and spokes, which fill planes more fully than flights between two minor destinations would) and length of travel (to satisfy customers).

Ferrer i Cancho has one such simple model for finding compromises between mean geodesic distance (travel time) and number of edges (cost). As the parameter controlling the relative importance of the two competing variables is varied, interesting regimes appear, with local minima passing from trees with exponential degree distributions, through trees with power-law distributions, to star graphs. However, for most values of the parameter, the global minimum was actually the star graph.

An alternative model shows interesting behavior in the global minimum too, by assigning an actual geometric distance to the edges (so that it is a spatial network, see MMathPhys miniprojects.Networks). Depending on whether more importance is assigned to travelling times or to waiting times at nodes, the resulting networks are more road-like (waiting times at intersections negligible) or more airline-like (waiting times significant).

See recent research: Like air traffic, information flows through neuron 'hubs' in the brain, finds IU study

Molecular biology

guillefix 16th May 2016 at 9:04pm

Molecular motors

guillefix 10th May 2016 at 6:47pm

Molecular motor proteins:

  • Kinesin. Moves along Microtubules towards positive end (mostly towards periphery)
  • Myosin. Moves along actin filaments
  • Dynein. Moves along microtubules towards negative end (mostly towards inside of cell)

Molecular physics

guillefix 11th May 2016 at 12:25pm

Molecular physics is the study of the physical properties of molecules, the chemical bonds between atoms as well as the molecular dynamics.

It is closely related to Atomic physics

Moment of inertia

guillefix 16th July 2016 at 3:45pm

Moments of power laws

guillefix 23rd June 2016 at 11:22pm

As can be shown again by approximating the sum by an integral, all the moments km\langle k^m \rangle of a power law distribution diverge for m>α1m>\alpha-1. Of course, this is in the limit of an infinite system with the same distribution; in finite systems (as in networks with a finite number of nodes) the moments will of course be finite (for a network, kk will have a maximum value, cutting off the domain of the integral used to calculate the moment).
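A quick numeric illustration (the value α=2.5\alpha = 2.5 is assumed): the first moment (m=1<α1=1.5m = 1 < \alpha - 1 = 1.5) settles down as the cutoff grows, while the second moment (m=2>1.5m = 2 > 1.5) keeps growing with the cutoff.

```python
# <k^m> for p(k) proportional to k^(-alpha) with cutoff k_max: the sum converges
# as k_max -> infinity only when m < alpha - 1.
def moment(m, alpha, k_max):
    norm = sum(k ** -alpha for k in range(1, k_max + 1))
    return sum(k ** (m - alpha) for k in range(1, k_max + 1)) / norm

alpha = 2.5
for k_max in (10**3, 10**4, 10**5):
    # m = 1: converges; m = 2: grows roughly like sqrt(k_max)
    print(k_max, round(moment(1, alpha, k_max), 3), round(moment(2, alpha, k_max), 1))
```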

Monoid

guillefix 28th June 2016 at 4:44pm

In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element. They are a Semigroup with identity.

Monomorphic limit (Wright-Fisher model)

guillefix 26th April 2016 at 12:24pm

(See context at the Arrival of the frequent).

Neutral spaces can be astronomically large, much bigger than even the largest viral or bacterial populations (see this paper). In that case, the local neighborhood of the population may not be fully representative of the neighborhood of the entire space.

This scenario can be most easily understood in the monomorphic limit: when mutants are rare, NLμ1NL \mu \ll 1

Now, the (average) rate of neutral mutations (per individual) is ν=Lμρ\nu = L \mu \rho, as ρ\rho is the probability that a mutation is neutral.

See more in the Monomorphic limit (Wright-Fisher model) tiddler.

Furthermore, Kimura showed two things relating to fixation (see Population genetics):

  • Probability of fixation. In a population following the Wright-Fisher model in a neutral space (no relative fitnessses), with no mutations, a single allele will eventually fix, and the probability for a particular allele to be the one that fixes is equal to its initial frequency. See the derivation here or here (page 15). For the generalization to non-neutral space see here (page 201). See here too.
  • Mean fixation time. In a population following the Wright-Fisher model in a neutral space (no relative fitnessses), with no mutations, the average time that it takes for a particular allele to fix, given that it fixes, is τ¯(p)=4N(1pp)ln(1p)\bar \tau(p)=-4N\left(\frac{1-p}{p}\right)\ln(1-p), where pp is the initial frequency of the allele. For pp small (0\rightarrow 0), ln(1p)p\ln(1-p) \rightarrow -p, and 1pp1p\frac{1-p}{p} \rightarrow \frac{1}{p}, and so τ¯(p)4N\bar \tau(p) \rightarrow 4N. See this SX question, the original paper by Kimura, here. For another way of deriving it, for the related Moran model see here (page 57).
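Kimura's fixation-probability result can be checked with a minimal neutral Wright-Fisher simulation (my own sketch; the population size, initial count and number of trials are assumed values for illustration):

```python
import random

def fixes(N, n0, rng):
    """Neutral Wright-Fisher: binomial resampling of the allele count each
    generation, until the allele either fixes (True) or is lost (False)."""
    n = n0
    while 0 < n < N:
        p = n / N
        n = sum(rng.random() < p for _ in range(N))  # a Binomial(N, p) draw
    return n == N

rng = random.Random(0)
N, n0, trials = 50, 5, 2000
rate = sum(fixes(N, n0, rng) for _ in range(trials)) / trials
print(rate)  # close to the initial frequency n0/N = 0.1, as Kimura's result predicts
```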

Now, when mutations are rare enough (that the same mutation occurring twice simultaneously is very unlikely), a mutation will initially just have a frequency p=1/Np = 1/N. This fact, combined with the above results imply two things:

  • The rate of fixations is equal to the rate of (neutral) mutations of an individual. The average rate of mutations in the total population is Nν=NLμρN\nu = N L \mu \rho. As their initial frequency is p=1/Np = 1/N, then they have a probability of fixation 1/N1/N. Then {the rate of {mutations that fix}} is rate at which they appear×probability they fix\text{rate at which they appear} \times \text{probability they fix} == NLμρ(1/N)=LμρN L \mu \rho (1/N) = L \mu \rho, where ρ\rho is probability that a mutation is neutral (otherwise it can't fix as we assume non-neutral mutants have 00 fitness).
  • {The mean fixation time of {a mutation that fixes}} is much smaller than {the mean time to get {a mutation that fixes}}, which we write mathematically as τ¯(p)τm\bar \tau(p) \ll \tau_m. If NN is large, p=1/N1p = 1/N \ll 1, and so τ¯(p)4N\bar \tau(p) \approx 4N. Also the time scale of getting {a mutation that fixes} ({the mean time to get {a mutation that fixes}} would be of the same order, of course) is 1/rate=1/(Lμρ)1/\text{rate} = 1/(L \mu \rho). Their ratio is τ¯(p)τm=4NLμρ1\frac{\bar \tau(p)}{\tau_m} = 4N L \mu \rho \ll 1, by the defining assumptions of the monomorphic limit, and noting that ρ\rho, being a probability, is <1<1. See here or here

The second point means that we are in a situation where the population fixes to a particular genotype in Nq\mathcal{N}_q, in the relatively fast time-scale 4N4N, and stays there during the much longer time 1/(Lμρ)1/(L \mu \rho), before it fixes to a new genotype.


+++++++(...)++++++

  • Large population limit
  • Large genome limit

Short-term correlations refer to the following: p-type individuals are being sampled from the same set (the set of p-types in the 1-neighbourhood of the currently fixed q-type genotype, which most of the population has) throughout the time that the population is fixed to a particular genotype. When the population (relatively quickly) transfers to a new genotype, the p-types produced are now sampled from a new set, but still all of them from the same set. The fact that they are sampled from the same set during inter-fixation times (tau_f) means they have correlations that last tau_f on average ("short-term").

If fixations occur much before the set of p-types in 1-neighbourhood is explored, these correlations are no longer observed.

As our evolutionary process is a Markov process, the first discovery time of a neighbour genotype, as well as the arrival time of the neutral mutant ‘‘destined’’ to be fixed, are distributed geometrically (or exponentially in a model with continuous time). Thus the means of these times are equal to the respective standard deviations, and we have large fluctuations.

The geometric distribution comes about because the Markov property implies that one can define a per-generation probability for each of the two events above ({discovery of a neighbour genotype}, and {arrival of the neutral mutant ‘‘destined’’ to be fixed}); each generation then corresponds to a Bernoulli trial, and first-arrival times follow a geometric distribution. For example, the probability of {arrival of the neutral mutant ‘‘destined’’ to be fixed} is approximately $L \mu \rho$ (valid when $L \mu \rho \ll 1$, which we assume. This ultimately comes from the fact that {when the probability of an event is small, the average number of times it occurs in a set of trials is approximately the same as {the probability of it occurring at least once}}. Essentially $1 - (1 - p)^N \approx Np$ when $p \ll 1$. See Probability theory too).
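The approximation $1 - (1-p)^N \approx Np$ is easy to check numerically. A minimal sketch (the values of $p$ and $N$ are illustrative):

```python
# Numerical check: for p << 1, the probability of at least one success in
# N trials, 1 - (1 - p)^N, is close to the expected number of successes N*p.

def prob_at_least_one(p, n):
    """Probability of at least one success in n independent Bernoulli(p) trials."""
    return 1.0 - (1.0 - p) ** n

p, n = 1e-5, 100                 # so that N * p = 1e-3 << 1
exact = prob_at_least_one(p, n)
approx = n * p
print(exact, approx)             # agree to about 0.05% here
```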

The continuous-time approximation: the mean {generation of first success, $k$} is fixed to $\bar{k}= 1/p$ (where $p$ is the probability of success in a Bernoulli trial). We rescale the time variable as $\tau = k/N$, so the mean is $\tau_f = 1/(pN)$, where $N$ is the reciprocal of the time step (i.e. the number of generations per unit time). With $p = 1/(\tau_f N)$ and $k = \tau N$, the geometric distribution becomes exponential: $\lim_{N \rightarrow \infty} N p(1-p)^{k-1} = \lim_{N \rightarrow \infty} \frac{1}{\tau_f}\left(1-\frac{1}{\tau_f N}\right)^{\tau N-1} = \frac{1}{\tau_f} e^{-\tau / \tau_f}$, where the factor $N$ converts the per-generation probability into a probability density in $\tau$.
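The convergence of the geometric waiting time to an exponential can also be checked numerically. The sketch below compares survival functions, $P(K > k)$ against $e^{-\tau/\tau_f}$, which sidesteps the pmf-to-density normalisation (parameter values are illustrative):

```python
import math

# The geometric tail P(K > k), with p = 1/(tau_f * N) and k = tau * N,
# approaches the exponential tail e^{-tau/tau_f} as N grows.

tau_f, tau = 2.0, 3.0
exp_tail = math.exp(-tau / tau_f)
for N in (10, 100, 10000):
    p = 1.0 / (tau_f * N)        # per-generation success probability
    k = int(tau * N)             # generation index at rescaled time tau
    geom_tail = (1.0 - p) ** k   # P(first success occurs after k trials)
    print(N, geom_tail, exp_tail)
```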

Now, $\tau_e = (K-1)L/(NL \mu)$ is the time scale to find all the 1-neighbour genotypes. Let $n_p^g$ be the number of mutations that can take $g$ to a $p$. Then $\tau_e/n_p^g = \frac{1}{\left(\frac{n_p^g}{(K-1)L}\right)NL \mu}$ is the time-scale to get a $p$ mutant from $g$. This is because $\frac{n_p^g}{(K-1)L}$ is the probability that {a mutation from $g$ leads to $p$}. The mean will be of this same order (and I think actually equal). Therefore the time $t_1$ to {first get {a mutation from $g$ that leads to $p$}} is distributed according to $Q(t_1) = \mathcal{N} e^{-n_p^g t_1/ \tau_e}$, where $\mathcal{N}$ is a normalization constant. Therefore, the {probability to get {a mutation from $g$ that leads to $p$}} in a time $\tau$ (the time between two consecutive fixations) is $\int_{t_1=0}^{t_1=\tau} Q(t_1) dt_1 = 1 - e^{-n_p^g \tau/ \tau_e}$. Integrating over the distribution of $\tau$, we obtain the probability $P(n_g^p)$ that phenotype $p$ is discovered before the next neutral fixation:

$$\xi = \frac{\tau_f}{\tau_e} = \frac{N}{(K-1)L\rho} \approx \frac{N}{L}$$

For $\xi \gg 1$ (large population limit):

  • If $n_p^g \neq 0$, $P(n_g^p) \approx 1$
  • If $n_p^g = 0$, $P(0) \approx 0$

We can apply a mean-field approximation to the monomorphic limit. Let $p(n_p^g)$ be the probability that a genotype $g$ in $\mathcal{N}_q$ has the given value of $n_p^g$. Then $\bar{P}(n_g^p) \approx p(0) P(0) + p(1) P(1)$, if we assume $p(n_p^g) \ll p(1)$ for $n_p^g > 1$. Since $P(0) \approx 0$, this gives $\bar{P}(n_g^p) \approx p(1) P(1) \approx \bar{n}_{pq} P(1)$, where $\bar{n}_{pq}$, the average of $n_p^g$, satisfies $\bar{n}_{pq} = \sum_n n\, p(n) \approx p(0) \cdot 0 + p(1) \cdot 1 = p(1)$ under the same assumption.

For $\xi \ll 1$ (large genome limit), $P(n_g^p) \approx n_g^p \xi$. In particular, $P(1) \approx \xi$. Then $\bar{P}(n_g^p) = \bar{n}_{pq} \xi = \bar{n}_{pq} P(1)$.

Finally, $P(n_g^p)$ is {the probability that phenotype $p$ is discovered before the next neutral fixation}, i.e. the probability that the {number of times {[phenotype $p$] appears} before the next neutral fixation} is greater than $0$, which is approximately the same as {the average number of times [it] appears}, if {{the probability that {[it] appears in one generation}} is small}, which is the case as {in the monomorphic limit, mutants are rare, $N L \mu \ll 1$}. Then, $\bar{P}(n_g^p)$ is the average of this quantity, which we use in the mean-field approximation.

Then, following the same derivation as in Polymorphic limit (Wright-Fisher model), we have

$$T(\alpha) = \frac{\tau_f \log(1-\alpha)}{\bar{P}(n_g^p)} = \frac{\tau_f \log(1-\alpha)}{\bar{n}_{pq} P(1)}$$

where $\tau_f$ is the (mean) duration of each "step" (corresponding to going from being fixed to one genotype to being fixed to another). Now, {the average number of mutations from a genotype in $\mathcal{N}_q$ leading to phenotype $p$} can be expressed as $\bar{n}_{pq} = (K-1)L\phi_{pq}$, as $\phi_{pq}$ is the mean probability that {a single-point mutation from a genotype in $\mathcal{N}_q$ leads to phenotype $p$}, and $(K-1)L$ is the number of single-point mutations. Now, we can find $T(\alpha)$ at the two limits of interest:
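The expression for $T(\alpha)$ can be sanity-checked numerically if we interpret it as the $\alpha$-quantile of the discovery time, with each fixation step of duration $\tau_f$ an independent trial with success probability $\bar{P}$ (an assumption about the derivation referenced above; all parameter values are illustrative):

```python
import math

# If each fixation step (duration tau_f) independently discovers phenotype p
# with probability P_bar, the alpha-quantile of the discovery time is
# T = tau_f * log(1 - alpha) / log(1 - P_bar); for P_bar << 1 this reduces
# to tau_f * |log(1 - alpha)| / P_bar.

def discovery_quantile(alpha, tau_f, P_bar):
    return tau_f * math.log(1.0 - alpha) / math.log(1.0 - P_bar)

tau_f, P_bar, alpha = 5.0, 1e-3, 0.5
exact = discovery_quantile(alpha, tau_f, P_bar)
small_P = tau_f * abs(math.log(1.0 - alpha)) / P_bar   # small-P_bar limit
print(exact, small_P)
```

For $\bar{P} \ll 1$ the exact quantile and the $\tau_f |\log(1-\alpha)| / \bar{P}$ form agree to within a fraction of a percent.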

Monosaccharides

guillefix 8th July 2016 at 5:56pm

More resources in simplicity bias in FSTs

guillefix 13th July 2016 at 8:55pm

See Simplicity bias in finite-state transducers

Statistical properties of FSTs

The graph structure of a deterministic automaton chosen at random

See Random deterministic automata

Origin of bias ideas

Topological trace formula

Random matrix product

Entropy reduction

On the entropy of a hidden Markov process

On Grammars, Complexity, and Information Measures of Biological Macromolecules

Activities and Sensitivities in Boolean Network Models

Complexity

Complexity theory
Descriptional complexity

–>Entropy and complexity of finite sequences as fluctuating quantities

–>Lempel-Ziv complexity analysis of one dimensional cellular automata

Coding Theorems for Individual Sequences. His complexity measure looks very similar to the topological entropy defined here.

http://arxiv.org/pdf/1512.04270v2.pdf . $\epsilon$-machine reconstruction, or computational mechanics, is a powerful tool in the analysis of complexity, which has been used in a wealth of different theoretical and practical situations.

Lempel-Ziv complexity

Network complexity


To look at

  • Entropy of Hidden Markov Processes and Connections to Dynamical Systems: Papers from the Banff International Research Station Workshop
  • Codes, Systems, and Graphical Models
  • An Introduction to Symbolic Dynamics and Coding
  • Fundamentals of Codes, Graphs, and Iterative Decoding
  • Topological Entropy and Equivalence of Dynamical Systems
  • Symbolic Dynamics and Its Applications
  • Ergodic Theory and Topological Dynamics of Group Actions on Homogeneous Spaces
  • Substitutions in Dynamics, Arithmetics and Combinatorics
  • Combinatorics on Words
  • Fractal Geometry, Complex Dimensions and Zeta Functions
  • Dynamics and Randomness

Resolving Markov Chains Onto Bernoulli Shifts Via Positive Polynomials

Complexity of strings in the class of Markov sources (citations)

On the entropy of a hidden Markov process (citations)

lempel ziv complexity finite state channel
lempel ziv complexity markov model

Check what these are! epsilon machines

Random matrix product

Capacity of finite state channels based on Lyapunov exponents of random matrices

Basic properties of the projective product with application to products of column-allowable

Morphogenesis

guillefix 8th March 2016 at 6:01pm

https://en.wikipedia.org/wiki/Morphogenesis

"How the tiger got its stripes."

Turing foundational paper

Reaction-diffusion equations

Computer simulation of reaction-diffusion equations

Simulation in Matlab

Xmorphia Nice exploration of the Gray-Scott reaction-diffusion DE.

http://mrob.com/pub/comp/xmorphia/pde-uc-classes.html

Motivation

guillefix 26th June 2016 at 2:51pm

Movements

guillefix 4th February 2016 at 9:45pm

Movements of Earth

guillefix 5th July 2016 at 3:16am

Mpemba effect

guillefix 10th May 2016 at 10:56pm

Multi-instance learning

guillefix 9th July 2016 at 4:26am

See Deep learning

Max-margin learning, transfer and memory networks.

good for generalizing models, transfer learning, multi-task learning. Good when you don't have much supervision data.

Max-margin: learning a function that identifies sensible data (e.g. sentences that make sense); that's what we do with the algorithm he explains, of finding a probability distribution bigger at the data points than "anywhere" else. This will, in particular, make the NN learn a good representation of the data, or embedding. For this we use hinge loss. In practice, we do this

Learn embeddings in one task and transfer these to solve new tasks

Example. He explains how deep multi-instance learning works. Nice

Matching

Corruption

Example: Bi-lingual word embeddings

When you can't corrupt the data: Siamese networks Paper

Example: Question answering system. Followed by relation learning (learning triplets like "cat eats mouse")

memory networks (see below) may be useful for transfer learning too..

One-shot learning using conv nets: as we already have good embeddings, just compare objects in embedding space. See beginning of this

Multilayer networks

guillefix 27th May 2016 at 6:13pm

Review paper: https://arxiv.org/abs/1309.7233

Multilayer networks arise when a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Such systems include multiple subsystems and layers of connectivity.

The structure and dynamics of multilayer networks

Some types

See paper for details:

Multilayer Networks Library for Python (Pymnet)

Multiplex networks

guillefix 27th May 2016 at 6:10pm

A network in layers, with connections between layers; the interconnections between layers are only between a node and its counterpart in the other layer (i.e. the same node).

http://cosnet.bifi.es/network-theory/multiplex-networks/

http://people.maths.ox.ac.uk/~lee/slides_Arenas.pdf

Multithreading

guillefix 30th June 2016 at 2:01am

See Concurrent computing

Introduction to Processes & Threads

Processes are divided into threads, each of which has its own Call stack but shares the memory owned by the process. This can make programs more efficient. For instance, Microsoft Word may be a single process, but it may have one thread for reading input, one for writing to files, and another for printing to the screen. Concurrent programming designs the program so that these threads may be running for the duration of the process, instead of switching between them. This abstraction of concurrent threads allows for easier design of many large programs. However, it creates some challenges in keeping execution synchronized, so that actions between different threads don't mix up.

For instance, a thread may begin writing to some object in memory, and the scheduler switches to a different thread, which now begins to write to that object. The result may not be as desired, if one didn't take this possibility into account.
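A minimal sketch of this hazard in Python: two threads increment a shared counter, and a lock makes each read-modify-write atomic (removing it risks lost updates):

```python
import threading

# Two threads increment a shared counter. The lock serialises the
# read-modify-write so the interleaving described above cannot lose updates.
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # remove this to risk a race condition
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock held
```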

A thread that runs independently can be called a daemon.

Music

guillefix 21st July 2016 at 1:32am

Music theory

guillefix 9th March 2016 at 5:02pm

See Human hearing.

Equal temperament and just intonation:

https://www.youtube.com/watch?v=VRlp-OH0OEA

See also xenharmonic music..

https://musiclab.chromeexperiments.com/Technology

Mutational robustness

guillefix 13th April 2016 at 1:05pm

See Population genetics

Neutral evolution of mutational robustness

In evolution of ribozymes in vitro, mutations responsible for an increase in fitness are only a small minority of the total number of accepted mutations (see Continuous in vitro evolution of catalytic function). This fact indicates that, even in adaptive evolution, the majority of point mutations are neutral. This is the basis of Kimura's neutral theory of evolution; see the paper.

A neutral network is a collection of mutually neutral genotypes (i.e. producing the same phenotype, whether structure or function), which are connected via single mutational steps; they sometimes form extended networks that permeate large regions of genotype space. A population is mutationally robust (insensitive against mutations) when it inhabits a highly interconnected region of the network so that most mutations lead to the same neutral network, thus leaving the phenotype unchanged.
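A toy illustration of mutational robustness (the GP map here, the majority bit of a binary genotype, is an invented example, not one from the literature): the robustness of a genotype is the fraction of its single-point mutants that keep the same phenotype, i.e. that stay on the same neutral network.

```python
# Hypothetical toy GP map: the phenotype of a binary genotype is its
# majority bit. Robustness = fraction of 1-point mutants with the same
# phenotype as the original genotype.

def phenotype(g):
    return int(sum(g) * 2 > len(g))   # 1 iff 1s are a strict majority

def robustness(g):
    same = 0
    for i in range(len(g)):
        mutant = g[:i] + (1 - g[i],) + g[i + 1:]
        same += phenotype(mutant) == phenotype(g)
    return same / len(g)

print(robustness((1, 1, 1, 1, 1)))  # 1.0: every neighbour is neutral
print(robustness((1, 1, 1, 0, 0)))  # 0.4: a boundary genotype is less robust
```

Genotypes deep inside a neutral set have all-neutral neighbourhoods, while those at the boundary do not, matching the idea that robust populations inhabit highly interconnected regions of the network.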

In Neutral evolution of mutational robustness, the authors found analytically, that for a range of population sizes and mutation rates of biological interest, the population's distribution over a neutral network is determined solely by the network's topology.

Mutual information

guillefix 5th July 2016 at 12:35pm

In Information theory, the mutual information between Random variables $X$ and $Y$ is defined as:

$$I(X;Y) = \sum_{x,y} p(x,y) \log{\frac{p(x,y)}{p(x)p(y)}} = E \log{\frac{p(X,Y)}{p(X)p(Y)}}$$

where $E$ denotes expectation. The mutual information measures the amount of information we obtain about $X$ by knowing $Y$ (see result below).

video

The mutual information between a random variable and itself is equal to its entropy

Some results (video):

$$I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)$$

$H(X|Y)$ is the Conditional entropy and thus measures the information about $X$ that $Y$ doesn't give you.
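These identities are easy to verify numerically on a small joint distribution (the table below is an arbitrary illustrative choice):

```python
import math

# Check I(X;Y) = H(X) - H(X|Y) on a small joint distribution p(x, y).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# marginals
px = {x: sum(v for (a, _), v in p.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in p.items() if b == y) for y in (0, 1)}

# mutual information, entropy, and conditional entropy (in bits)
I = sum(v * math.log2(v / (px[x] * py[y])) for (x, y), v in p.items())
HX = -sum(v * math.log2(v) for v in px.values())
HXgY = -sum(v * math.log2(v / py[y]) for (x, y), v in p.items())
print(I, HX - HXgY)  # the two values agree
```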

Nano 3D printer

guillefix 26th February 2016 at 12:43pm

Eric Drexler's scheme

  • Structural support. Large slabs of rigid protein (manufactured by self-assembly). Or out of DNA?
  • Stepper motors.
    • Three steps (phases) to be able to go back and forth.
      • Actuators are molecules (like azobenzenes).
      • Control by three wavelengths (visible; what was the problem with UV ones? Not enough non-overlapping wavelengths, I suppose).
      • Nanosecond response times.
  • Material transport by diffusion.
  • Material deposition by tip that activates site (catalytic).
    • To avoid a significant error rate by thermal fluctuations:
      • "large" structural components
      • larger molecular building blocks.

Oxford as one center to develop this!?

Questions:

Exactly how do the blocks hold together? The stepper motor surfaces should be designed to have modulatable attractive potentials, I imagine.

Predict the stochastic motion of these. How large do surfaces have to be? How fast will they move?

Slides

Talk at Martin School on Jan 2016

Notes from talk

Protein engineering

Extended structures:

Design of ordered two-dimensional arrays mediated by noncovalent protein-protein interfaces

Compact structures:

Accurate design of co-assembling multi-component protein nanomaterials

Photoswitching

http://pubs.rsc.org/en/content/articlelanding/cc/2013/c3cc46045b#!divAbstract . Allows more wavelengths to be used, including red light, which penetrates skin more.

http://onlinelibrary.wiley.com/doi/10.1002/anie.201207602/abstract . Can now switch at nanosecond rates.


Key Challenge: Coordinated cross-disciplinary development

DOE workshop. Oxford, etc.

nano_3d_printer.png

guillefix 18th February 2016 at 2:55am

nano_3D_printer2.png

guillefix 18th February 2016 at 3:12am

Nanoengineering

guillefix 29th June 2016 at 6:47pm

Nanomedicine

guillefix 29th June 2016 at 6:45pm

Nano-drugs delivery systems

Often only 3% of medicines reach their target; particularly bad with cancer. We need smart nanosystems. There is a problem with aggregation. Using circular DNA wrapped around carbon nanotubes makes them stable in many solution media. Target only cancers with a molecule added to the DNA. Add a fluorescent molecule for diagnostics. Then theragnostics: both therapeutics and diagnosis on the spot.

Can use the wrapping of DNA for other nanoparticles too!

Nanotoxicity and nanowaste are important problems too.

A Logic-Gated Nanorobot for Targeted Transport of Molecular Payloads

Nano-sensors and diagnostics

Detect causes of diseases before they are pathological.

Nano-devices for medical surgery

Nano-needle. Very non-aggressive.

Nanotechnology for regenerative medicine

3D-printing with cells. We need an extracellular matrix (scaffold) to guide stem cells as they develop (in particular when they stop growing).


Stability of DNA nanomachines in cellular environment

One of the main challenges in DNA-based nanomedicine! One of the main arguments for approaching nanomedicine with some non-organic materials.

Addressing the Instability of DNA Nanostructures in Tissue Culture

Stability of DNA Origami Nanoarrays in Cell Lysate

Nanotechnology

guillefix 29th June 2016 at 6:36pm

Natural language processing

guillefix 27th June 2016 at 2:56pm

Nearest-neighbour classification

guillefix 9th July 2016 at 3:59am

Nearest-neighbour methods: to get the prediction $\hat{Y}$ for a point $x$, use [those observations ($k$ of them) in the training set T closest in input space to the point $x$]. Remember the training set is a set of pairs $(x,y)$. Closest often refers to Euclidean distance.

It turns out that the effective number of parameters of $k$-nearest neighbours is $N/k$, even if technically there is only one parameter, $k$.

–> To me it seems more like a method in Nonparametric statistics! Indeed it is (see Wiki).
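A minimal pure-Python sketch of the method described above, with majority voting among the $k$ Euclidean-nearest training points (data is an invented toy example):

```python
# k-nearest-neighbour classification: predict by majority vote among the
# k training points closest (in squared Euclidean distance) to the query.

def knn_predict(train, x, k):
    """train: list of (point, label) pairs; x: query point."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)   # majority vote

train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]
print(knn_predict(train, (0.5, 0.5), 3))  # 'a'
print(knn_predict(train, (5.5, 5.5), 3))  # 'b'
```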

Negative osmosis

guillefix 2nd July 2016 at 9:06pm

When a flow of liquid occurs through a membrane from a more concentrated solution to a more dilute solution, it is designated as negative osmosis. Compare with (positive) Osmosis

See Physical mechanisms of osmosis

See here: http://physics.stackexchange.com/questions/265752/can-osmosis-go-the-other-way/265879#265879

MECHANISM OF OSMOTIC FLOW IN POROUS MEMBRANES

Many standard theoretical calculations of equilibrium osmotic pressure work under very ideal/simplifying assumptions; like the lattice model in Physical biology of the cell book.

Similarly, treatments of osmotic flow are often simplified; see for instance The solution-diffusion model: a review.

Real-life applications of osmotic flow need more complete descriptions, which include parameters that are often measured, mainly the reflection coefficient. Results can often differ (often just quantitatively) from more naive thermodynamic treatments.

It is when you try to find these parameters theoretically that you get into the hard part, as a microscopic model is needed, whether based on kinetics or hydrodynamics. This is where the richness of real-life phenomena comes to light.

Careful theoretical treatment has found negative reflection coefficients to be possible: MECHANISM OF OSMOTIC FLOW IN POROUS MEMBRANES, Diffusioosmosis of nonelectrolyte solutions in a fibrous medium. However, I think they are only possible for non-perfectly-semipermeable membranes! See below. Actually Anderson's paper agrees! Note that in his figure 5, a negative reflection coefficient is found only when the solute is smaller than the pore!

Configurational effect on the reflection coefficient for rigid solutes in capillary pores

In the case of osmosis of electrolytes, there are more studies: Charge-Mosaic Membranes: Enhanced Permeability and Negative Osmosis with a Symmetrical Salt

Diffusioosmosis of Electrolyte Solutions in a Fine Capillary Tube

Anomalous osmosis and salt concentration dependence of the reflection coefficient in charged membranes

Although regular osmosis looks at semipermeable membranes, similar diffusio-osmotic effects can be studied for membranes where both solute and solvent can go through the pores:

Osmotic Flow through Fully Permeable Nanochannels

Drastic alteration of diffusioosmosis due to steric effects

Kinetics and thermodynamics across single-file pores: Solute permeability and rectified osmosis (only find negative reflection coefficient (negative diffusion) when the membrane is permeable to solute as well).


Experimental measurements of negative osmosis

NEGATIVE REFLECTION COEFFICIENTS

Entropy-Driven Pumping in Zeolites and Biological Channels (finds negative osmosis, only when the membrane is permeable to both species)

Binary Diffusion and Bulk Flow through a Potential‐Energy Profile: A Kinetic Basis for the Thermodynamic Equations of Flow through Membranes (finds negative osmosis, only when the membrane is permeable to both species)

Nonequilibrium thermodynamics in biophysics book by Katzir-Katchalsky, Aharon. | Curran, Peter F (in Maths Inst library!)

An Experimental Study of Negative Osmosis


Anomalous effects during electrolyte osmosis across charged porous membranes

Osmosis and reverse osmosis in fine-porous charged diaphragms and membranes

OSMOTIC PRESSURE, ROOT PRESSURE, AND EXUDATION

Osmotic properties of polyelectrolyte membranes: positive and negative osmosis

Theoretical calculation of reflection coefficients of single salt solutions through charged porous membranes

Neighbourhood space

guillefix 14th July 2016 at 2:52am

A neighbourhood space is a weaker notion than a Topological space. It is a Set with a Neighbourhood structure

See Csazar 1978

Neighbourhood structure

guillefix 14th July 2016 at 2:51am

A neighbourhood structure $\mathcal{N}$ on a set $X$ is an assignment to each $x \in X$ of a filter $\mathcal{N}(x)$ on $X$ all of whose elements contain the point $x$.

Network

guillefix 13th July 2016 at 8:54pm

Network analysis software

guillefix 8th May 2016 at 10:19pm

Network complexity

guillefix 15th July 2016 at 8:44pm

Measures of Complexity of a Graph or Network

Quantitative Measures of Network Complexity

What is a complex graph?

Algorithmic complexity of a graph

Correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks They show that: Kolmogorov complexity can capture group-theoretic and topological properties of abstract and empirical networks, ranging from metabolic to social networks, to small synthetic networks.

We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless compression approach to Kolmogorov complexity, and a normalised version of a Block decomposition method (BDM) based on algorithmic probability theory.

Complexity and edge density

Complexity is minimal for empty or complete graphs

Kolmogorov Random Graphs and the Incompressibility Method

Complexity vs symmetry of the graph

The symmetry is measured by the cardinality of the Graph automorphism group. The following plot from empirical complex networks shows that they are indeed negatively correlated. The graph automorphism is normalized, and NBDM refers to the normalized BDM.
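For very small graphs, the automorphism group cardinality can be computed by brute force over all vertex permutations, which makes the symmetry measure concrete (a sketch only; real networks need specialised tools such as nauty):

```python
from itertools import permutations

# Count graph automorphisms of an undirected graph on n nodes by checking
# every vertex permutation (feasible only for a handful of nodes).

def automorphisms(n, edges):
    eset = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(n)):
        mapped = {frozenset((perm[a], perm[b])) for a, b in eset}
        count += mapped == eset
    return count

# A 4-cycle has dihedral symmetry group D4, of order 8:
print(automorphisms(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 8
# A path on 4 nodes has only the identity and the end-to-end flip:
print(automorphisms(4, [(0, 1), (1, 2), (2, 3)]))          # 2
```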

Information content in a graph

Entropy and the Complexity of Graphs Revisited

Information Content of Colored Motifs in Complex Networks

Symmetry of graphs

Emergence of symmetry in complex networks

Network quotients: Structural skeletons of complex systems

Network science

guillefix 13th July 2016 at 9:02pm

MMathPhys course mostly about Network Theory

Books

Networks: An introduction - Newman

See books by Barabasi et al., it has nice ones.

Other research articles

Check wikipedia network science portal and others resources..

~my problem sheets here~

Oxford course website and blog.

Some important classes of networks:

Apollonian networks


Empirical study of Networks


Fundamentals of network theory

Mathematics of networks

Measures and metrics for networks

Large-scale structure of networks


Computer algorithms

Basic concepts of algorithms

Fundamental network algorithms

Matrix algorithms and graph partitioning


Network models

Random graphs

Random graphs with general degree distributions

Models of network formation

Other network models:


Processes on networks

Percolation and network resilience.

Epidemics on networks

Dynamical systems on networks

Network search


Further network measures and analytics

Community structure in networks

Network complexity


Review articles

Statistical mechanics of complex networks

Complex networks: Structure and dynamics

Others

Many many good references here on random and evolving networks: http://www.fzu.cz/~slanina/bookmark_files/bkm3-1.html

Multilayer networks

Multiplexing

Like air traffic, information flows through neuron 'hubs' in the brain, finds IU study

Network theory

guillefix 8th May 2016 at 10:19pm

network_scatter_plot.png

guillefix 13th February 2016 at 5:38pm

network2.PNG

network3.PNG

Networks data sets

guillefix 8th May 2016 at 10:19pm

Neural networks with memory

guillefix 9th July 2016 at 4:20am

Memory is good for recognizing time sequence data.

Memory networks. Apply max-margin. Actual description. Paper

Time constraints for facts

Recurrent neural nets. Due to the vanishing gradient problem, naively, RNNs don't give you long-term memory. RNNs

Long Short-Term Memory (LSTM) was introduced to solve this problem.

Neuroanatomy

guillefix 8th July 2016 at 2:23am

Neurodegeneration

guillefix 5th July 2016 at 3:10am

Neuromorphic computing

guillefix 23rd June 2016 at 11:16pm

Computing systems that imitate the working of Neuronal networks, at hardware and/or software level. A basic model is the Spiking neural network. One advantage is that they tend to be more energy-efficient.
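A minimal leaky integrate-and-fire neuron, the usual building block of spiking neural networks, can be sketched as follows (Euler discretisation; parameter values are illustrative, not from any particular chip or paper):

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks towards 0,
# integrates the input current, and emits a spike (then resets) whenever it
# crosses the threshold.

def simulate_lif(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Return the spike times of a LIF neuron driven by `inputs`."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leak + integrate
        if v >= threshold:            # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

spikes = simulate_lif([0.15] * 100)
print(spikes)
```

Under constant drive the neuron spikes periodically; stronger input shortens the inter-spike interval.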

Numenta

IBM TrueNorth.

Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

This is direct evidence that an “integrate-and-spike” mechanism has similar computational capability to the more proven ANNs. The IBM paper however highlighted one major weakness of SNNs: training of the TrueNorth system required simulation of back-propagation on another conventional GPU:

Training was performed offline on conventional GPUs, using a library of custom training layers built upon functions from the MatConvNet toolbox. Network specification and training complexity using these layers is on par with standard deep learning.

See more interesting stuff here: Microglia: A Biologically Plausible Basis for Back-Propagation

There has however been no biological evidence of a structural mechanism for “back-propagation” in biological brains. Yoshua Bengio published a paper in 2015 (see: http://arxiv.org/abs/1502.04156 ), “Towards Biologically Plausible Deep Learning”. The investigation attempts to explain how a mechanism for back-propagation could exist in Spike-Timing-Dependent Plasticity (STDP) of biological neurons.

It is however questionable whether neurons are able to learn by themselves without the need of an external feedback pathway that spans multiple layers.

There is however an alternative mechanism that recently has been discovered that may be a more convincing argument that is based on a structure that is independent of the brain’s neurons. There is a large class of cells in the Brain called Microglia ( see: https://www.technologyreview.com/s/601137/the-rogue-immune-cells-that-wreck-the-brain ) that are responsible for regulating the neurons and their connectivity.

In summary, biological brains have a regulatory mechanism in the form of microglia that are highly dynamic in regulating synapse connectivity and pruning neural growth. The activity is most pronounced during sleep. SNNs have been shown to have inference capabilities equivalent to Convolution Networks. SNNs however have not shown to effectively learn on their own without a ‘back-propagation’ mechanism. This mechanism is most plausibly provided by the microglia.

neuron.jpg

Neuronal network

guillefix 8th May 2016 at 10:18pm

Neuroscience

guillefix 24th June 2016 at 2:04am

Neutral theory of evolution

guillefix 26th April 2016 at 7:20pm

Kimura's neutral theory of evolution. He proposed that (at least for molecular evolution) most mutations are neutral, meaning that they don't lead to a change in fitness.

Because different phenotypes often do have different fitness, this comes about because of the large redundancy in GP maps.

When the redundancy is large enough for some phenotype, or there are genetic correlations so that nearby genotypes (in the mutation network) tend to map to the same phenotype, we find large neutral spaces. If Kimura is right, most mutations occur within these spaces and are governed by genetic drift (random changes in allele frequencies in finite populations), not by natural selection.

Genotype space, links represent single-point mutations. It has a hypercube network structure.

Neutral spaces or neutral sets are those sets of genotypes that produce the same phenotype. These are important in the ideas of the Arrival of the frequent above which relies on the many-to-one nature of the GPM. It seems to also be related to the Survival of the flattest.

Evolution explores neutral space, being exposed to a larger number of neighbouring possibilities, before switching to a different, better phenotype

Wiki article

See Monomorphic limit (Wright-Fisher model)

Presentation about genetic drift

Founder effect is the loss of genetic variation that occurs when a new population is established by a very small number of individuals from a larger population.

Neutral evolution of mutational robustness

https://en.wikipedia.org/wiki/Molecular_clock

The Molecular Clock Hypothesis: Biochemical Evolution, Genetic Differentiation and Systematics

Smoothness within ruggedness: The role of neutrality in adaptation

New advances in deep learning

guillefix 17th June 2016 at 7:06pm

New Tiddler

guillefix 17th January 2016 at 7:35pm

News and real time data

guillefix 6th April 2016 at 12:35am

Non-equilibrium statistical physics

guillefix 2nd July 2016 at 7:12pm

Resources

Lecture series by Balakrishnan!

Notes on Nonequilibrium StatPhys MT2015 Oxford (mostly stochastic processes)

Statistical physics -- a second course

A Kinetic View of Statistical Physics, P.L. Krapivsky, S. Redner, E. Ben-Naim

Stochastic Processes in Physics and Chemistry, N. van Kampen

Handbook of stochastic methods - Gardiner


Non-equilibrium Statistical Physics is the branch of Statistical physics that deals with systems out of equilibrium, so that averages can change in time (Actually not quite: see Thermodynamic equilibrium). This is much harder to do in full generality, as systems offer much more diversity out of equilibrium, as may be expected. As said in that page, one often has three approaches:

  • For a small system coupled to a large chaotic system, one has a stochastic process, which describes the evolution under the random influence of the large chaotic system.
  • For a large system which is only slightly out of equilibrium, so that relevant macroscopic averages analogous to those used in thermodynamics can still be defined, one can describe the system using Non-equilibrium thermodynamics
  • For a large system that is considerably out of equilibrium, one has to use the tools of Kinetic theory to describe it (or newer approaches, see below). However, if the system is very far from equilibrium, even these may be inappropriate, and finding an appropriate description may be extremely hard. An example of this are systems with strong Turbulence. Our only approaches to understand these systems are often phenomenological.

Related things (also in the course/exam):


Other aspects and approaches, some of which have led to the recent understanding of systems very (even arbitrarily) far from equilibrium. In approximate chronological order:

See overview in this lecture

Large deviation theory. Einstein formula (1908)

Application of large deviation theory to dynamics: Onsager (1931)

Projector operator method

linear response theory

Nonlinear response theory Nonlinear Response Theory Nonlinear projection operator method Zwanzig projection operator

Nonequilibrium Equality for Free Energy Differences Jarzynski 1997. Discussed on lecture 3 of Shin-ichi

Fluctuation theorem Fluctuation Theorems Fluctuation theorems, or fluctuation relations, which have been developed over the past 15 years, have resulted in fundamental breakthroughs in our understanding of how irreversibility emerges from reversible dynamics, and have provided new statistical mechanical relationships for free energy changes. They describe the statistical fluctuations in time-averaged properties of many-particle systems such as fluids driven to nonequilibrium states, and provide some of the very few analytical expressions that describe nonequilibrium states. Quantitative predictions on fluctuations in small systems that are monitored over short periods can also be made, and therefore the fluctuation theorems allow thermodynamic concepts to be extended to apply to finite systems. For this reason, fluctuation theorems are anticipated to play an important role in the design of nanotechnological devices and in understanding biological processes. These theorems, their physical significance and results for experimental and model systems are discussed.

Shin-ichi calls them identities, and explains them on Lecture 4

Stochastic thermodynamics Focus on Stochastic Thermodynamics Stochastic thermodynamics has emerged as a framework for describing small driven systems using thermodynamic notions on the level of individual fluctuating trajectories. Topics on the article:

  • Stochastic energetics and entropy production along trajectories.
  • Non-equilibrium work and fluctuation-(dissipation)-relations.
  • Efficiency and efficiency at maximum power of heat engines, thermoelectrics and isothermal machines.
  • Thermodynamics of molecular motors.
  • Dissipation, irreversibility and information.
  • Thermodynamics of molecular and cellular information processing.

Stochastic thermodynamics: A brief introduction

Figure. Artistic view of a driven molecular motor-cargo complex with its trajectories. Image by Daniel Schmidt, University Stuttgart.

See also new advancements mentioned in the article on the new theory on the origin of life (linked in Abiogenesis)

Shin-ichi Sasa publications

On Various Questions in Nonequilibrium Statistical Mechanics Relating to Swarms and Fluid Flow

Read Ilya's book on thermodynamics, where he covers the non-equilibrium part. Also his book on Self-organization, and other books on non-equilibrium statistical physics. See if then I can get a more clear derivation of the Allen-Cahn and Cahn-Hilliard equations in Phase transition, describing general forms of diffusion and phase field evolution.

See also Complex systems, which are often analysed using ideas from nonequilibrium statistical physics.

Fluctuations in nonequilibrium statistical mechanics. One project is about rare event simulations, non-Markovian extensions of large deviation theory, and zero-range processes (Harris, Touchette). A second one is about random packing optimization problems, which have very different solutions depending on the shape of the objects (Baule).

http://www.research.ed.ac.uk/portal/en/persons/martin-evans(2f8bc4da-9178-4a62-ad41-059a612018c6).html

Stochastic Thermodynamics in Biology

Thermodynamic Costs in Implementing Optimal Estimators; Kalman filter; Dynamics of protein synthesis: transcription, translation, and mRNA degradation; Simple models of evolution with selection and genealogies; Universal constraints for biomolecular systems; Stochastic Thermodynamics of Chemical Networks

Stochastic approaches in systems biology. See Systems biology

Stochastic thermodynamics of Langevin systems under time-delayed feedback control: Second-law-like inequalities

Stochastic thermodynamics, fluctuation theorems and molecular machines

Video lecture! Udo Seifert - Stochastic thermodynamics 1 lecture series (school on thermalization)

Portrait of Udo Seifert

More literature on stochastic thermodynamics

Martin Z. Bazant Chemical Kinetics in Nonequilibrium Thermodynamics - Martin Z. Bazant

Prof. Dr. Udo Seifert

Introduction to stochastic thermodynamics: (prof. dr. M. Esposito) Part1

The stochastic thermodynamics of a rotating Brownian particle in a gradient flow:

See stuff in here: MMathPhys Condensed Matter and Astrophysics/Plasma Physics/Physics of Continuous Media Strands Short Syllabi

1. Dynamics of Stochastic Processes (12 lectures) • Langevin equation and mean-squared displacement versus time, fundamentals of Molecular Dynamics and Stochastic Rotation Dynamics simulation methods • Probabilistic description of stochastic process, Fokker-Planck equation • Kramers rate theory, escape probability and first-passage time • Master equation, equilibrium and detailed-balance, fundamentals of Monte Carlo simulation method, chemical reactions, one-step processes (traffic models), fundamentals of Lattice Boltzmann simulation method • Diffusion-reaction processes and pattern formation • Heterogeneous catalysis and the Michaelis-Menten rule in enzymatic reactions • Rectification of stochastic motion and Brownian ratchets

2. Fluctuations and Response (4 lectures) • Equilibrium fluctuations, correlation functions • Density fluctuations, hydrodynamic fluctuations and the long-time tail • Linear response theory, response function, causality and Kramers-Kronig relations • Fluctuation-dissipation theorem near equilibrium • Small-system (stochastic) thermodynamics, Jarzynski inequality • Generalised fluctuation-dissipation theorem in nonequilibrium systems

https://scholar.google.co.uk/citations?hl=en&user=1V6ZcgMAAAAJ&view_op=list_works&sortby=pubdate

Viewpoint: Debut of a hot “fantastic voyager”

Open-System Nonequilibrium Steady State:  Statistical Thermodynamics, Fluctuations, and Chemical Oscillations

Information in Biological Systems and the Fluctuation Theorem

Stochastic thermodynamics with information reservoirs

Non-equilibrium statistical physics (long lecture series)

Introduction to macroscopic fluctuation theory by Giovanni Jona Lasinio

Foundations of Synergetics II: Chaos and Noise

See Biophysics

Non-equilibrium statistical mechanics: from a paradigmatic model to biological transport

Non-self-averaging percolation process

guillefix 13th June 2016 at 8:08pm

A type of Percolation process that is non-self-averaging (often? def. of self-averaging?), in the sense that the relative variance of the size of the largest component doesn't vanish in the thermodynamic limit.

See also: Achlioptas processes are not always self-averaging; Phase transitions in supercritical explosive percolation; Unstable supercritical discontinuous percolation transitions

Discontinuity in k-vertex rule percolation processes

O. Riordan and L. Warnke showed that k-vertex rule percolation processes are continuous; however, certain percolation processes based on picking a fixed number of random vertices are discontinuous. This paradox is resolved in this paper, where they show that some processes, while continuous at exactly the transition point, still exhibit infinitely many discontinuous jumps in an arbitrary vicinity of the transition point: a Devil's staircase.

This staircase is in fact stochastic, as the jump points and sizes are stochastic random variables. This stochasticity is present even in the thermodynamic limit, and that is what gives rise to the non-self-averaging property.

Nonlinear continuous dynamical system

guillefix 8th July 2016 at 5:30pm

Continuous dynamical systems are systems of 1st order O.D.Es. Linear dynamical systems (O.D.E.s linear) are easy to analyze, and can be analyzed by looking at the eigenvalues of the Jacobian.

Nonlinear continuous dynamical systems are those where the O.D.Es are nonlinear. They offer much richer behavior and thus require more variety in analysis techniques. Locally, however, they can be linearized and analyzed by the same linear Jacobian techniques.

Autonomous systems are those that don't have explicit time dependence.

Phase portrait features and attractors

Attractors are regions of phase space to which points converge if they begin within a given basin of attraction.

Features:

  • Equilibrium points
  • Trapping regions

Only in 2+ D:

  • Nullclines, lines where the derivative of one of the variables becomes zero.
  • Limit cycles
  • Isoclines, lines of equal inclination (equal slope of tangent to trajectory).

Only in 3+ D:

Equilibrium points

A.k.a. fixed points.

They can be classified by their stability and other qualitative features. See Classification of equilibria in 2D.

The classification is done by computing the Jacobian matrix at the fixed point, and looking at the eigenvalues and eigenvectors to see how the flow behaves locally:

More on stability

Poincare-Bendixson theorem, trapping regions. Useful to prove existence of limit cycles in 2D; it also makes chaos impossible in 2D. Need at least 3D!

Classifications

Conservative systems:

  • Phase space volume conserved (Liouville's theorem)
  • There is an energy function (that has to be independent of time, I think) that doesn't change with time.

Non-conservative systems:

  • (Strong) Lyapunov function (also not explicitly dependent on time, I think) that is monotone decreasing or increasing with time in some region, except at an equilibrium point. This region is called the region of asymptotic stability for the equilibrium point. Can generalize so that it's only required to be monotone in a region enclosing a trapping region. Can also have weak Lyapunov functions, which are not required to be strictly decreasing (V̇ < 0) but only non-increasing (V̇ ≤ 0).
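As a concrete check of the (weak) Lyapunov idea, here is a minimal Python sketch for the damped harmonic oscillator ẋ = v, v̇ = -x - γv, where V = (x² + v²)/2 satisfies V̇ = -γv² ≤ 0 (the parameter values are arbitrary choices for illustration):

```python
# Weak Lyapunov function for the damped harmonic oscillator
#   x' = v,  v' = -x - gamma*v
# Candidate: V(x, v) = (x^2 + v^2)/2, with dV/dt = -gamma*v^2 <= 0.
# (Illustrative sketch; parameter values are arbitrary.)

def step(x, v, gamma, h):
    # One classical 4th-order Runge-Kutta step for the 2D system.
    def f(x, v):
        return v, -x - gamma * v
    k1 = f(x, v)
    k2 = f(x + h/2*k1[0], v + h/2*k1[1])
    k3 = f(x + h/2*k2[0], v + h/2*k2[1])
    k4 = f(x + h*k3[0], v + h*k3[1])
    x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, v

def V(x, v):
    return 0.5 * (x*x + v*v)

gamma, h = 0.5, 0.01
x, v = 1.0, 0.0
energies = [V(x, v)]
for _ in range(2000):
    x, v = step(x, v, gamma, h)
    energies.append(V(x, v))

print(energies[0], energies[-1])  # V decays toward 0 at the equilibrium
```

The trajectory spirals into the origin while V never increases, which is exactly the asymptotic-stability argument described above.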

Bifurcation theory

1-dimensional flows

Bifurcations

  • Saddle-node bifurcation (a.k.a. fold): stable and unstable nodes collide and annihilate each other.
  • Transcritical bifurcation: stable and unstable nodes collide and interchange stability.
  • Pitchfork bifurcation: a stable (unstable) node becomes unstable (stable) while two stable (unstable) nodes appear, for a supercritical (subcritical) bifurcation.
    • supercritical
    • subcritical

2-dimensional flows

  • Same as in 1D
  • Hopf bifurcation. Hopf Bifurcations in 2D
  • Saddle-node bifurcation of limit-cycles
  • Global bifurcations:
    • Homoclinic and heteroclinic bifurcations.

3-dimensional flows

  • Same as in 2D
  • Cycle bifurcations
    • Fold bifurcation (saddle-node for cycles)
    • Flip bifurcation (for supercritical: stable cycle becomes unstable and gives rise to a cycle of twice the period, often a Moebius strip, so it requires 3D)
    • Neimark bifurcation, or secondary Hopf bifurcation: a stable limit cycle becomes unstable and a limit cycle around the limit cycle forms, i.e. a trajectory that lives on a torus.
  • Global bifurcations (play important role in route to chaos). See pages 259+ in Thompson
    • Intermittency and mode-locking. Intermittency catastrophe: intermittent jumps between two connected attracting regions in an attractor, which become separate attractors at the catastrophic (discontinuous) bifurcation. Just before the bifurcation there are still jumps between the still-connected attracting regions, which technically still belong to the same attractor; these jumps become less and less frequent. If one of the attracting regions becomes more and more transient (see example of drift ring and saddle-node in Thompson p. 259) then we only have one attractor after the bifurcation. This kind of catastrophe is observed in maps. However, it is also possible in flows, and is then called an Omega explosion. See Fig. 13.3 below.
    • Hysteresis and blue sky catastrophe. Blue sky catastrophe refers to the global bifurcation in which a limit cycle disappears discontinuously (at a given control parameter value), when it collides with a saddle equilibrium point.

  • Bifurcations of chaotic attractors. Routes to chaos

Other concepts:

Global bifurcations, bifurcations that are not identified by a change localized close to a limit point or cycle. These occur when there is a qualitative change in the topology of invariant manifolds, or in the topology of basins. Global bifurcations can be accompanied, or even caused by local bifurcations.

Poincare section: snapshots of phase space of a dynamical system define a map, so that we can use the theory of nonlinear maps to analyze, for example, chaotic attractors.

Structural stability refers to when a certain qualitative feature (like a type of bifurcation) isn't changed by small perturbations of the equation, by which we mean the addition of small extra terms to the equation.

Catastrophe theory studies bifurcations and other qualitative phenomena as control parameters are varied.

One can also distinguish discontinuous (or catastrophic) vs continuous bifurcations. See page 252 of Thompson's book. See also page 257 of that book; one distinguishes safe and dangerous boundaries.


Examples:

Duffing oscillator

Josephson junction Revise this


Books:

Strogatz

Thompson and Stewart, Nonlinear Dynamics and Chaos. Very good.


http://www.scholarpedia.org/article/Canards

http://www.math.harvard.edu/library/sternberg/

Nonlinear map

guillefix 8th July 2016 at 5:31pm

Oxford notes

Discrete-time dynamical systems are sometimes called maps. As usual, there are linear maps, which can be represented by a matrix (plus a constant vector if the map is affine, instead of just linear). However, most interesting behaviour is observed in nonlinear maps, in which the state at discrete time n+1 depends on the state at the previous time via a nonlinear function f:

x_{n+1} = f(x_n; n)

where we allow discrete-time dependence of f. Autonomous maps won't have such dependence.


Poincare maps

Cross-sections of the phase plane of a Continuous dynamical system that are nowhere tangential to a trajectory are called Poincare sections. Trajectories become points in the lower dimensional space of the cross-section, and the dynamical system becomes a discrete map, called the Poincare map.


Features of maps

The equivalent of equilibrium points in dynamical systems are fixed points. A fixed point is one that is mapped to itself.

Periodic cycles are closed orbits (like limit cycles, or orbits, in dynamical systems).

Stability

The stability of a fixed point is determined by its multiplier (which is just the derivative of the function defining the map, λ = ∂f/∂x) evaluated at the fixed point. A point is stable if |λ| < 1, unstable if |λ| > 1, and neutrally stable if |λ| = 1 (at which point a bifurcation occurs).

One can use the Jury test to find if the roots of a polynomial are inside the unit circle, which is useful for stability.

The stability of a periodic cycle can be found by multiplying together the multipliers evaluated at each of the points in the cycle. These products are called characteristic (or Floquet) multipliers; their logarithms are the Floquet exponents.
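A minimal sketch of the multiplier criterion, using the logistic map as a stand-in example (the map and parameter value are illustrative choices, not from the notes):

```python
# Stability of a fixed point of the logistic map f(x) = r*x*(1-x)
# via its multiplier lambda = f'(x*).  (Illustrative sketch.)

def f(x, r):
    return r * x * (1 - x)

def fprime(x, r):
    return r * (1 - 2 * x)

r = 2.5
x_star = 1 - 1 / r          # nontrivial fixed point, here 0.6
lam = fprime(x_star, r)     # multiplier, here 2 - r = -0.5

# |lambda| < 1, so iterates should converge to the fixed point:
x = 0.1
for _ in range(100):
    x = f(x, r)

print(x_star, lam, x)
```

Since |λ| = 0.5 < 1, the orbit starting at 0.1 indeed settles onto the fixed point, as the criterion predicts.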

Bifurcations in 1D maps

  • Fold bifurcations. Bifurcations that occur when the multiplier λ = 1. Can be:
    • Saddle-node
    • Transcritical
    • Pitchfork
  • Flip or period-doubling bifurcation. Occurs when the multiplier λ = -1.
  • Hopf bifurcation. Occurs when the multiplier λ = e^{iθ} (for θ ≠ 0, π, I suppose).

One can also have bifurcations of periodic cycles in 2D maps, I think.

There are also global bifurcations in periodic maps, some of which are routes that lead to chaos. See Nonlinear dynamical systems and Chaos theory.

2D maps

Local linear stability analysis is done by Jacobians, and multipliers are replaced by the Jacobian's eigenvalues, which must now be less than one in magnitude for stability. For periodic cycles, one multiplies the Jacobians.
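As a sketch of the 2D case, here are the Jacobian eigenvalues at a fixed point of the Hénon map (an illustrative choice; at the classic parameter values the fixed point is unstable, and the product of the eigenvalues equals the constant Jacobian determinant, -b):

```python
# Linear stability of a fixed point of the Henon map
#   x' = 1 - a*x^2 + y,  y' = b*x
# via the eigenvalues of the Jacobian.  (Illustrative sketch.)
import cmath
import math

a, b = 1.4, 0.3   # classic parameter values

# Fixed point: solve x = 1 - a*x^2 + b*x
x_star = (-(1 - b) + math.sqrt((1 - b) ** 2 + 4 * a)) / (2 * a)
y_star = b * x_star

# Jacobian at the fixed point and its eigenvalues
j11, j12, j21, j22 = -2 * a * x_star, 1.0, b, 0.0
tr, det = j11 + j22, j11 * j22 - j12 * j21
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

print(abs(lam1), abs(lam2))  # one multiplier exceeds 1: the fixed point is unstable
```

Note that |λ₁ λ₂| = |det J| = b < 1: the map contracts areas even though the fixed point itself is unstable.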

Another very interesting feature of nonlinear maps, is that many of them exhibit chaos.


Examples

Henon map

Standard (or Chirikov) map

See more examples of chaotic maps in Chaos theory

Nonlinear oscillations

guillefix 16th March 2016 at 8:08pm

Can analyze using Perturbation methods. In particular:

  • Poincaré-Lindstedt method. Letting the dependent variable x depend on the independent variable t via terms of all orders, i.e. t, εt, ε²t, etc. However, these terms are forced to only appear together, in the form t + εω₁t + ε²ω₂t + .... That is, we just expand the frequency as a perturbation series. A consequence is that constants of integration are not allowed to depend on t at any order.
  • Method of Multiple Scales. We allow the dependent variable x to depend on the independent variable t via terms of all orders, i.e. t, εt, ε²t, etc., without any constraint. A consequence is that we can treat these as independent variables, or scales. "Constants" of integration can now depend on them (i.e. on the scales slower than the scale corresponding to the order considered).
  • Krylov-Bogoliubov Method of Averaging. Assumes that the solution x(t) has the form it has when ε = 0, but the constants of integration can now change with time. As we have two arbitrary time-dependent functions (in the case of a second-order ODE), we are clearly underdetermining the problem. Therefore Krylov and Bogoliubov added the constraint that the time derivative of x(t), i.e. ẋ(t), also has the same form as when ε = 0, with the same constants of integration upgraded to the same time-dependent functions.

For example (for the method of averaging), if x(t) = a cos(t + θ) is the ε = 0 solution, then we require:

  • x(t) = a(t) cos(t + θ(t)), and
  • ẋ(t) = -a(t) sin(t + θ(t))

Duffing oscillator

Van der Pol oscillator. Paper about its periodic solutions. Apply method of multiple scales.

Relaxation oscillations and transition layers

As an example consider the van der Pol eq. with the nonlinear term very large (Λ ≫ 1), instead of very small.

We introduce a variable y s.t. the eq. becomes y' = -x. One also shows that x evolves to a state of quasi-equilibrium (very quickly, on time scale 1/Λ²) given by a curve on the y-x plane. Then it moves along that curve, and one finds that the system must do jumps that are also very fast (on time scale 1/Λ² again), periodically. See plot... Well... I'm omitting many details. See starting from page 11 of the notes.

Synchronization and coupled oscillators

Kuramoto model


Lecture notes on nonlinear vibrations

Books:

Nayfeh

Hayashi

Nonlinear regression

guillefix 9th July 2016 at 3:57am

Nonlinear regression. Like linear regression, but the parameters enter nonlinearly in the function representation, for example as weights in a multi-layer perceptron (MLP), i.e. an ANN, usually with a few layers (shallow learning..). Vowpal Wabbit is good for logistic regression.

Nonlinear system

guillefix 8th July 2016 at 5:30pm

See Topics in Nonlinear Dynamics by Balakrishnan, and another lectures by him

Nonlinear dynamical systems (often abbreviated to Nonlinear systems) are Dynamical systems where the O.D.Es or the mapping functions that describe the dynamics are nonlinear. They offer much richer behavior, like bifurcations and chaos. Thus, while locally they can be linearized and analyzed by the same linear Jacobian techniques, they require more variety in analysis techniques, such as bifurcation theory, Lyapunov functions, trapping regions, attractors, and chaos theory. Make subsections of these and organize better See Wiggins book, and Strogatz.

Nonlinear continuous dynamical system

Nonlinear oscillations

Nonlinear maps (aka Nonlinear Discrete dynamical system)

The theory of discrete systems has many analogies to the theory of continuous systems.

Chaos theory


Invariant manifolds in dynamical systems


Oxford course

Strogatz YB lectures

Chaos Journal


Books

Strogatz Nonlinear systems dynamics and chaos

Deterministic Nonlinear Systems: A Short Course Vadim S. Anishchenko, Tatyana E. Vadivasova, Galina I. Strelkova (auth.)

See books in oxford course website

Other lecture notes: http://www.jpoffline.com/physics_docs/y3s5/nlp_lecture_notes.pdf

More LNs: http://14.139.172.204/nptel/CSE/Web/108106024/Module5.pdf

Nonparametric statistics

guillefix 27th March 2016 at 6:59pm

https://en.wikipedia.org/wiki/Nonparametric_statistics

not based on parameterized families of probability distributions

Normalization of power laws

guillefix 23rd June 2016 at 11:22pm

See Power laws

The normalization C, assuming k starts at 1, is related to the Riemann zeta function, C = 1/ζ(α), or the generalized (incomplete) zeta function, C = 1/ζ(α, k_min), if there is a minimum k over which we normalize. Or we could approximate the sum needed to normalize by an integral: C = 1/∫_{k_min}^∞ k^{-α} dk = (α-1) k_min^{α-1}
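A quick numerical sketch comparing the zeta-function normalization (here approximated by a direct partial sum rather than a library call) with the integral approximation, for illustrative values of α and k_min:

```python
# Normalization constant C for a discrete power law p(k) = C * k^(-alpha), k >= kmin:
# the exact form uses the generalized (Hurwitz) zeta function, here approximated by
# a direct partial sum, versus the continuum integral approximation
# C ~ (alpha - 1) * kmin^(alpha - 1).  (Illustrative sketch; alpha, kmin arbitrary.)

alpha, kmin = 2.5, 10

# Partial sum for zeta(alpha, kmin) = sum_{k=kmin}^inf k^(-alpha);
# the neglected tail beyond 10^6 is of order 1e-9 here.
zeta = sum(k ** -alpha for k in range(kmin, 10 ** 6))
C_exact = 1 / zeta

C_integral = (alpha - 1) * kmin ** (alpha - 1)

print(C_exact, C_integral)  # the integral approximation gets close for large kmin
```

For k_min = 1 the two differ substantially, which is why the zeta-function form matters when the power law extends down to small k.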

Notes on Ard Louis' paper on contingency, convergence and hyper-astronomical numbers in biological evolution

guillefix 25th April 2016 at 12:05am

Modern synthesis 1. Variation unbiased 2. Space of possible genotypes very vast (even after discarding biologically unviable ones). Evolution is contingent. C.f genetic drift.

However, *Contingency in genotype space does not imply contingency in phenotype space => convergent evolution

  • Genetic code (mapping from codons to amino acids) has redundancy. Mackay suggested this could have arisen because it was the fittest code. However, evolving the code is very costly (as it would affect many proteins at once), so probably very unlikely. Thus, Crick proposed it was a "frozen accident".
  • Neutral theory of evolution.
  • pre-Darwinian evolution of genetic code. Optimized for error reduction

Protein coding Hoyle-Salisbury paradox. How can evolution find right proteins in hyperastronomically large space? Maynard Smith argument Keefe and Szostak computer experiments

Levinthal's paradox

Redundancy, correlation, and funnel-shaped landscapes

RNA case study

When talking about the word game, word probability incorrectly used

For a self-assembling system the many-to-one map is from cluster configurations (like genotype) to physically distinct systems (like phenotype). However, self-assembly explores the phenotype space uniformly and thus shows a bias in the genotype space, and it's a bias against simplicity and symmetry.

Algorithm information theory.... For fixed length codes, simple codes have many ways of appearing

Fixed code lengths means we have a finite state machine. Algorithmic complexity for finite state machines?

The formula given in the slide of Solomonoff is not a probability, but an expected number of times a certain program will come up, that's why it's not normalized. For long codes, it's approximately a probability, though. No

Feed fixed input length codes with a short prefix corresponding to the map (the condition of it being short, in particular much shorter than the inputs, could be the quantitative condition corresponding to Ard's observation that the map should be "simple"). Then feed this to a Turing Machine (TM). Results will be of varying length. You expect shorter lengths to be more common because: input to the TM is approximately like feeding random fixed-length codes (because the prefix code is much shorter than the input, by assumption, and inputs are random). If we reverse the TM, hmm no it doesn't work. Well, the output will be an input that produces a fixed-length string of bits for the reversed TM, but the distribution in outputs is not random now? Are there more fixed-length strings that will produce shorter codes? Seems unlikely. But I'm missing the many-to-one nature of the mapping in this description. Or hmmmm the little prefix code should make this happen somehow? What kind of "prefix codes" can do this?

Notes on Extracting Hidden Hierarchies in Complex Spatial Networks video

guillefix 26th March 2016 at 4:02am

Extracting Hidden Hierarchies in Complex Spatial Networks

See Spatial networks

  • Leaf venation. The most efficient structure to do transportation from one point to many points in a region of space is a tree-like structure. However, most modern plant leaves have an intricate loopy topology (reticulation) for increased resilience against damage.
  • Physarum polycepharum slime mold. Nice network. It is "smart".
  • Loop hierarchy structure of planar network, forms a tree representation of it. Nice
  • 3D networks. Connectome.
  • Quaking aspen root network. Network of extended root structure of plants symbiotic with fungi. 80% of plants do this!
  • Ant colonies
  • Sand piles, dunes & granular matter. Network in granular media under applied stress
  • Vasculature. Except in lungs, you get loopy, reticular network structure
  • Any graph can be represented in 3D (without edges crossing): Book representation!
  • A graph can't be represented in a 2D plane without edges crossing in general (graphs that can are called planar). However, a graph may be embeddable (w/o edges crossing) on a 2D surface other than the plane (like a torus). The genus of the simplest surface on which a graph can be represented like this is called that graph's genus.
  • Cycle space (set of loops or combination of loops that live on the graph). Fundamental basis
  • His algorithm for determining the cycle structure in a 3D graph can be proved to work (asymptotically) for 3 regular graphs. In biological networks, this is essentially always the case due to how they form. For other spatial networks, degree distribution is usually highly peaked, and the average degree is low, and he suggests that this may mean it also works for them. He thinks it won't work mostly for scale-free graphs.
  • Can start with hexagonal lattice on surface of certain genus, and apply perturbations (like dislocations)
  • Growth model for the physarum slime

Nuclear physics

guillefix 11th June 2016 at 1:55pm

Number theory

guillefix 16th May 2016 at 9:12pm

Numerical experiments on the simplicity bias in finite-state transducers

guillefix 12th July 2016 at 12:59am

Note this is calculated with zlib complexity..

Average bias over 100 samples: 0.74. 74% of the output states have most of the inputs.


See code here


I also have got the code working. Due to the way the libraries I'm using work, it has to be done in 5 steps: generating the fst files, converting them, running the fsts on random inputs, counting the number of inputs per output, and computing complexities of outputs. I'm going to write a bash script that calls these in the right order. I'm also using the (modified) Lempel-Ziv complexity measure that you use, that Chico gave me. At the moment, the random generation of fsts is done in Python. I think this is fine, as the bulk of the computation is the "running" and complexity steps, which are C++. However, I found a C++ library that can randomly generate automata (http://regal.univ-mlv.fr/); I haven't yet managed to make it work, but if we do, it's maybe better to use that one.

From preliminary runs, I have indeed found the C++ to be much faster, so that I could rather quickly run 10^6 input strings on 50 random 5-state transducers. Of those, 11-13 showed clear simplicity bias, the rest showing much smaller bias. This was actually using some Python code that is now C++, and should now work even better.

Other statistics and complexity measures that we were talking about are yet to be implemented.

Numerical linear algebra

guillefix 27th March 2016 at 8:15pm

Oxford course

Over and over again we see a pattern like this:

nonlinear --(linearize & iterate)--> LINEAR

PDE --(discretize)--> ALGEBRA

Because of this, computers have brought linear algebra, and numerical linear algebra, to the forefront of the mathematical sciences.

Standard algorithms to solve a linear system Ax = b, i.e. matrix inversion, grow like O(N³). To improve this one can:

  • use parallel computing, or
  • algorithms to take advantage of sparsity of matrix (many entries are zero)
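As a sketch of the sparsity point: for a tridiagonal matrix, forward elimination plus back substitution (the Thomas algorithm) costs O(N) rather than O(N³). A minimal pure-Python version, assuming no pivoting is needed (the example system is an arbitrary illustration):

```python
# Exploiting sparsity: the Thomas algorithm solves a tridiagonal system Ax = d
# in O(N) operations instead of the O(N^3) of dense elimination.
# (Illustrative sketch; assumes the system needs no pivoting.)

def thomas(a, b, c, d):
    """a: sub-diagonal (len n-1), b: diagonal (len n), c: super-diagonal (len n-1)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: discrete 1D Laplacian with -2 on the diagonal, 1 off-diagonal
n = 5
a = [1.0] * (n - 1); b = [-2.0] * n; c = [1.0] * (n - 1)
d = [1.0] * n
x = thomas(a, b, c, d)
print(x)
```

Exactly this structure (the discrete Laplacian) shows up below when implicit methods for the heat equation require a tridiagonal solve at every time step.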

In recent years flop count is less and less important at the high end (i.e. for many processors) – communication is a bigger bottleneck.

Numerical methods for differential equations

guillefix 21st February 2016 at 12:59pm

Initial value problems (IVP) for ordinary differential equation (ODE)

In standard form

u' = f(u, t)

could represent a system of equations (i.e. u a vector).

Discretize time in steps of size k (timestep).

Numerical methods (finite difference discretization methods):

  • Multi-stage. Runge-Kutta: 1 step, i.e. only neighbouring grid points.
    • Modified Euler. Accuracy O(k²)
    • Fourth-order Runge-Kutta. Accuracy O(k⁴)
  • Multi-step. Adams-Bashforth. Uses points n steps away in the grid. Drawback: they are tricky to start up because extra values are needed.
    • 1st order. Called the (forward) Euler formula. Accuracy O(k)
    • 2nd order. Accuracy O(k²)
    • etc.
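The orders of accuracy above can be checked numerically; a minimal sketch on the test problem u' = -u (an illustrative choice), halving the step and watching how the error shrinks:

```python
# Order of accuracy in practice: forward Euler (O(k)) vs classical RK4 (O(k^4))
# on the test problem u' = -u, u(0) = 1, exact solution e^(-t).
# (Illustrative sketch.)
import math

def euler(f, u, t, k, nsteps):
    for _ in range(nsteps):
        u = u + k * f(u, t)
        t += k
    return u

def rk4(f, u, t, k, nsteps):
    for _ in range(nsteps):
        s1 = f(u, t)
        s2 = f(u + k/2 * s1, t + k/2)
        s3 = f(u + k/2 * s2, t + k/2)
        s4 = f(u + k * s3, t + k)
        u = u + k/6 * (s1 + 2*s2 + 2*s3 + s4)
        t += k
    return u

f = lambda u, t: -u
exact = math.exp(-1.0)

# Halving the step should cut the error by ~2 for Euler, ~16 for RK4
err = lambda method, n: abs(method(f, 1.0, 0.0, 1.0 / n, n) - exact)
ratio_euler = err(euler, 100) / err(euler, 200)
ratio_rk4 = err(rk4, 100) / err(rk4, 200)
print(ratio_euler, ratio_rk4)
```

The observed ratios (≈ 2 and ≈ 16) are exactly the 2¹ and 2⁴ predicted by the O(k) and O(k⁴) global error estimates.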

IVP codes in MATLAB

  • ode23: low-order RK
  • ode45: higher-order RK
  • ode113: variable-order multistep
  • ode23s, ode15s, ode15i, ode23t, ode23tb: variants for stiff problems
  • etc.

In Chebfun

N = chebop(a,b) % define the interval [a,b]

N.op = @(x,u) ... % define the ODE, with diff(u,k) = kth derivative of u

N.bc = ... % boundary conditions

Order of accuracy, convergence, stability, etc.

See here for an explanation of local truncation error (LTE), used to find the order of accuracy (what we call accuracy above; e.g. O(k²) means the error decreases with the square of the time step).

Convergence and Stability

Theory of convergence of multistep formulas by Dahlquist (1956). Analogs for RK too.

Key definitions:

consistent: order of accuracy ≥ 1.

stable: if for f(t, u) = 0 all the solutions are bounded, i.e. does the error grow or stay bounded. See here too.

convergent: v → u for each fixed t as k → 0 (ignoring rounding errors from computing).

Dahlquist equivalence theorem:

Convergence ⇔ consistency + stability

The Adams formulas are consistent and stable, hence convergent

Adaptive ODE codes adapt step size and other parameters so that estimated errors (using methods above, like LTE) are smaller than a prescribed value.

Chaos and Lyapunov exponents. The Lorenz equations. Sinai billiards is another famous chaotic system.

Stability regions: regions of ka space (a is a parameter in the model ODE; a = 0 corresponds to f = 0, as defined above for stability; I guess here we are being more general..) in which solutions remain bounded (this is achieved when the characteristic polynomial of the recurrence relation, obtained by the finite difference method, has roots with |r| ≤ 1 and any root with |r| = 1 is simple). See here too.

Stiffness. A stiff ODE is one with widely varying time scales. One may need a very small k because there are modes with ka (i.e. parts of the equation which create behaviour corresponding to a certain a value) outside the stability region, even if our solution of interest has effective ka inside it.

This is manifested as our solution changing on a long timescale, but depending on short time-scale terms in equation.

Solution: backward-differentiation formulas, or implicit formulas, that include f(v_{n+1}, t_{n+1}), unlike explicit formulas.

These require solving a (generally) nonlinear equation (or a system of equations for PDEs). And this may need to be solved numerically itself often, for example by Newton's method.
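A minimal sketch of an implicit step with a Newton solve inside, on an illustrative stiff test equation (the equation and step size are made up for the example; at this step size forward Euler would be unstable, since |1 - 50k| = 4 > 1):

```python
# Stiff ODE: backward (implicit) Euler with a Newton solve at each step,
# applied to u' = -50*(u - cos(t)) - sin(t), whose slow solution is u = cos(t).
# (Illustrative sketch; equation and step size chosen for the example.)
import math

def f(u, t):
    return -50.0 * (u - math.cos(t)) - math.sin(t)

def dfdu(u, t):
    return -50.0

def backward_euler_step(u, t, k):
    # Solve w = u + k*f(w, t+k) for w by Newton's method
    w = u  # initial guess
    for _ in range(20):
        g = w - u - k * f(w, t + k)
        gp = 1.0 - k * dfdu(w, t + k)
        w_new = w - g / gp
        if abs(w_new - w) < 1e-12:
            return w_new
        w = w_new
    return w

k, u, t = 0.1, 1.0, 0.0
for _ in range(20):          # integrate to t = 2
    u = backward_euler_step(u, t, k)
    t += k

print(u, math.cos(t))  # stays close to the slow solution cos(t)
```

Despite the step size being far too large for the fast time scale, the implicit method tracks the slow solution, which is the whole point of using it for stiff problems.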


Aside. We've been discussing IVPs here only.

Boundary value problems (BVP) also important. Nonlinear BVP may not have unique solutions! (unlike IVP).

Can use chebfun to solve.


Partial differential equations

Now have time and space.

Simplest approach is again finite difference discretization. Now discretizing time and space.

Numerical stability

von Neumann analysis, or discrete Fourier analysis. Plug an oscillatory (complex exponential) mode into the finite difference formula, and see whether some mode blows up (amplification factor greater than 1 in modulus) or not. Define the region of stability thus.
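As a sketch, the amplification factor of the FTCS scheme for the heat equation can be scanned over Fourier modes to recover the classic stability condition μ = k/h² ≤ 1/2 (a standard textbook example, used here for illustration):

```python
# von Neumann stability analysis of the FTCS scheme for the heat equation
# u_t = u_xx: plugging the Fourier mode v_j^n = g^n * exp(i*j*theta) into
#   v_j^{n+1} = v_j^n + mu*(v_{j+1}^n - 2*v_j^n + v_{j-1}^n),  mu = k/h^2,
# gives the amplification factor g(theta) = 1 - 4*mu*sin^2(theta/2).
# The scheme is stable iff max |g| <= 1, i.e. mu <= 1/2.  (Illustrative sketch.)
import math

def max_amplification(mu, nmodes=1000):
    return max(abs(1 - 4 * mu * math.sin(theta / 2) ** 2)
               for theta in (math.pi * i / nmodes for i in range(nmodes + 1)))

print(max_amplification(0.4))  # <= 1: stable
print(max_amplification(0.6))  # > 1: some modes blow up
```

The worst-behaved mode is the sawtooth θ = π, where g = 1 - 4μ; that is the mode that blows up first when μ exceeds 1/2.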

PDEs can also be stiff for same reasons as ODEs, and then need to use implicit methods too. A non-linear example is the Kuramoto-Sivashinsky equation.

Order of accuracy

Defined now for both the timestep k and the space step h (see notes). To improve the order of accuracy over the straightforward Euler method (which is first order in k) we use the trapezoidal rule, which is symmetric in t (so that first-order errors cancel, and is thus 2nd order in k). In the case of the heat equation it's known as the Crank-Nicolson formula. For the wave equation, the analogous centred scheme is the leapfrog formula (1928).

Reaction-diffusion equations and other stiff PDEs. Can use exponential integrator methods... Solitons

Finite differencing in general grids

Not necessarily equally-spaced.

Principle:

1. At each x_j decide which data from neighbouring points, v_{j-r}, ..., v_{j+s}, to use.

2. Interpolate these data by a polynomial of degree r+s.

3. The finite difference approximation to the kth derivative is p^{(k)}(x_j).

We don't do these steps explicitly at every step; rather, there are slick algorithms that produce a formula for general vvs on arbitrary grids xjx_js, and one uses that formula. See B. Fornberg, "Generation of finite difference formulas on arbitrarily spaced grids," Math. Comput. 51 (1988), 699-706 and B. Fornberg, "Calculation of weights in finite difference formulas", SIAM Review 40 (1998), 685-691.
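The three steps above can be sketched directly by solving the moment conditions of the interpolating polynomial (a minimal Python version; Fornberg's algorithms are the efficient way to do this in practice):

```python
import math
import numpy as np

def fd_weights(xs, x0, k):
    """Weights w such that sum_j w[j]*f(xs[j]) approximates f^(k)(x0).

    Equivalent to interpolating the data by a polynomial and differentiating:
    solve the moment conditions sum_j w[j]*(xs[j]-x0)**m = k! * [m == k]
    for m = 0, ..., len(xs)-1.
    """
    n = len(xs)
    d = np.asarray(xs, dtype=float) - x0
    A = np.vander(d, n, increasing=True).T   # A[m, j] = d[j]**m
    b = np.zeros(n)
    b[k] = math.factorial(k)
    return np.linalg.solve(A, b)
```

On the equally spaced stencil {-1, 0, 1} this recovers the classical centred formulas, e.g. weights (1, -2, 1) for the second derivative.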

In multiple space dimensions the same principles apply, but the system of equations that needs to be solved for implicit methods corresponds to a matrix with a much wider "band" (i.e. set of non-zero diagonals) than in 1 dimension. The structure of this matrix, in the case of discretizing the Laplacian, is the famous "discrete" or "lattice" Laplacian (related to the Graph laplacian). See notes. This Laplacian can often be written as a Kronecker sum.
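For example (a sketch assuming Dirichlet boundaries), the 2D lattice Laplacian on an n×n grid is the Kronecker sum of two 1D second-difference matrices:

```python
import numpy as np

def lap1d(n, h=1.0):
    """Tridiagonal 1D second-difference matrix (Dirichlet boundaries)."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

def lap2d(n, h=1.0):
    """2D discrete Laplacian as the Kronecker sum kron(L, I) + kron(I, L)."""
    L, I = lap1d(n, h), np.eye(n)
    return np.kron(L, I) + np.kron(I, L)

L2 = lap2d(3)
# An interior grid point gets the familiar 5-point stencil:
# -4 on the diagonal, +1 for each of its four grid neighbours.
```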

Spectral methods


Examples of Differential Equations, with nice explanations:

Trefethen et al.'s PDE COFFEE TABLE BOOK

Reaction-diffusion equations in Morphogenesis


Books:

Griffiths & Higham, 2010 - introduction to numerical ODE

Iserles, 2009 - includes connection to PDEs

LeVeque, 2007 - likewise

Hairer, Norsett & Wanner I & II - authoritative; full of fun and historical remarks

Ascher & Petzold 1998 - also includes DAEs (differential-algebraic equations, which combine ODEs and nonlinear eqs)

Deuflhard & Bornemann, 2002

Trefethen, old online textbook (http://people.maths.ox.ac.uk/trefethen/pdetext.html)

Numerical_method_PDE.png

guillefix 19th February 2016 at 1:37pm

Object-oriented programming

guillefix 17th February 2016 at 8:19pm

a.k.a. OOP

object = collection of data and functions (methods), that often act on this data.

Keywords: Encapsulation. Message-passing metaphor. data abstraction. Modularity.

Abstract data types (often implemented as classes). A class is a collection of objects with characteristics in common. A class is represented as a template from which one can instantiate objects. Instantiation is often done by "calling" the class, as if it were a function.

In many respects, classes and objects are similar.

Data hiding: one can only access instance values through defined methods. Sometimes this is built into the language, but even if not, it is often good practice.

An object built from a class is an instance, and it has attributes: methods and fields (variables). These are accessed using dot (.) notation.

Methods in Python

  • _ _init_ _: create instance
  • _ _str_ _: printed representation
  • _ _cmp_ _: comparisons (returns -1, 0, 1). (Python 2 only; Python 3 uses rich comparison methods like _ _lt_ _ and _ _eq_ _.)
  • _ _iter_ _ and _ _next_ _ (next in Python 2) to define how iteration happens over an object that represents a collection.

These are doing operator overloading. In Python, dir(p) shows all methods associated with an object.
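A minimal illustration of these special methods (Python 3; `Vec` is a made-up class for the example):

```python
class Vec:
    def __init__(self, x, y):           # instance creation
        self.x, self.y = x, y

    def __str__(self):                  # printed representation
        return "Vec({}, {})".format(self.x, self.y)

    def __add__(self, other):           # operator overloading: v + w
        return Vec(self.x + other.x, self.y + other.y)

    def __iter__(self):                 # makes list(v), for-loops, etc. work
        yield self.x
        yield self.y

v = Vec(1, 2) + Vec(3, 4)
print(v)          # Vec(4, 6)
print(list(v))    # [4, 6]
```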

type(instance) returns the class.

Inheritance

A class can inherit attributes from another class, specified when it is defined.

Shadowing (a.k.a. overriding an inherited method).
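A toy Python example of inheritance and shadowing (the class names are made up):

```python
class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        return "..."

class Dog(Animal):               # Dog inherits __init__ from Animal
    def speak(self):             # shadows (overrides) Animal.speak
        return self.name + " says woof"

d = Dog("Rex")
print(d.speak())                 # Rex says woof
print(isinstance(d, Animal))     # True: a Dog is also an Animal
```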

OOP is good for modelling systems where you have lots of elements that possibly interact.


Factory functions in JavaScript

Factory functions provide the same functionality as classes, but have some advantages. The main disadvantage is that they are probably somewhat slower.

I think this is related to prototype-oriented programming, JavaScript's version of OOP.

Octet rule

guillefix 7th July 2016 at 11:54pm

https://www.wikiwand.com/en/Octet_rule

The octet rule is a chemical rule of thumb that reflects observation that atoms of main-group elements tend to combine in such a way that each atom has eight electrons in its valence shell, giving it the same electronic configuration as a noble gas. The rule is especially applicable to carbon, nitrogen, oxygen, and the halogens, but also to metals such as sodium or magnesium.

Omega_explosion.png

guillefix 14th March 2016 at 7:45pm

Ontology

guillefix 13th July 2016 at 9:01pm

What things exist?

Contemporary ontology

  1. What are the most general features of the World, and what sorts of things does it contain? What is the World like?
  2. Why does a World exist – and, more specifically, why is there a World having the features and the content described in the answer to question 1?
  3. What is our place in the World? How do we human beings fit into it?

See Metaphysics.

I think the answer to these questions lies in Systems theory, Mathematics, and Science.

Concepts

Entity

Property

Process


https://www.wikiwand.com/en/Categories_(Aristotle)

Operating system

guillefix 30th June 2016 at 2:39am

Operating System Basics

Managing processes

Memory allocation

Modern operating systems designed for multitasking make use of Concurrent computing ideas, such as Multithreading

Interprocess communication (IPC)

System calls

File system

Operation (Mathematics)

guillefix 14th July 2016 at 12:46am

A Function from a Cartesian product of Sets to another set. Often, the domain is a Cartesian power of a single set.

Operations research

guillefix 28th June 2016 at 4:00pm

Operations research is a discipline that deals with the application of advanced analytical methods to help make better decisions.

Transportation, Assignment, and Transshipment Problems

(Basically problems in linear programming, a.k.a linear optimization)

From Winston's book on OpRes: http://www.producao.ufrgs.br/arquivos/disciplinas/382_winston_cap_7_transportation.pdf

http://uk.mathworks.com/help/optim/ug/linprog.html

Optical illusion

guillefix 1st July 2016 at 4:29am

Ambiguous Cylinder Illusion

Ambiguous Cylinders

Troika, squaring the circle, Kohn Gallery.webm

不可能モーション2 〜 Impossible Motions 2 〜

I wonder if there could be some formal analogies between how these illusions apparently distort space-time, and how General relativity works.

Optics

guillefix 7th February 2016 at 12:35am

Optimization

guillefix 8th May 2016 at 2:16pm

https://en.wikipedia.org/wiki/Mathematical_optimization

https://en.wikipedia.org/wiki/Optimization_%28disambiguation%29


Gradient descent

Newton's method.

(Offline algorithm, you process all the data at each step)

Taylor expand to second order (in a multivariate way) and minimize the resulting quadratic, i.e. take its derivative (gradient) and set it to 0. Each step thus minimizes a local quadratic model of the objective.
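A sketch of the resulting iteration (Python; for a quadratic objective it converges in a single step):

```python
import numpy as np

def newton_minimize(grad, hess, x0, steps=10):
    """Newton's method: repeatedly minimize the local quadratic model,
    i.e. solve H(x) dx = -grad(x) and update x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + np.linalg.solve(hess(x), -grad(x))
    return x

# f(x) = 0.5 x^T A x - b^T x has gradient A x - b and Hessian A,
# so the minimizer satisfies A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = newton_minimize(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```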

Newton CG (conjugate gradient) algorithms.

The expensive step is computing the Hessian. Quasi-Newton methods like BFGS and L-BFGS approximate it.

Line search

Stochastic gradient descent

Vid

(Online algorithm: you process the data sequentially, in chunks. You need this if you do not have access to all of it at the same time, or if you have so much data that not all of it fits in RAM.)

You only use a mini-batch (a small sample) of input data at a time, in practice

There are theorems showing that this converges well.

Downpour – Asynchronous SGD

Polyak averaging. Running average over the parameter values at all time steps performed up to now.

Momentum. You add inertia to the particle, so that gradient descent is not just velocity = −gradient (as it would be for a particle in a viscous fluid), but acceleration = −(viscosity)·velocity − gradient.

Adagrad: put more weight on rare features [Duchi et al]. Very useful. Rare features (i.e. the value along some dimension, for example) tend to carry more information, i.e., they are able to tell you more about what the output yy should be. This seems maybe related to AIT.
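The basic SGD-with-momentum loop can be sketched as follows (Python; the least-squares mean-estimation problem is a made-up toy example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_momentum(grad, x0, data, lr=0.1, beta=0.9, steps=200, batch=10):
    """SGD with momentum: each step uses the gradient on a random mini-batch,
    and the update carries inertia through the velocity v."""
    x, v = float(x0), 0.0
    for _ in range(steps):
        mb = data[rng.choice(len(data), size=batch, replace=False)]
        v = beta * v + grad(x, mb)      # velocity accumulates gradients
        x = x - lr * v
    return x

# Toy objective: mean over data of 0.5*(x - d)^2, minimized at the data mean.
data = rng.normal(3.0, 0.1, size=1000)
x_hat = sgd_momentum(lambda x, mb: np.mean(x - mb), 0.0, data)
```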

More things on optimization

Constrained optimization

Linear programming, used in Operations research

Simplex algorithm

Nonlinear programming

Heuristic optimization

Evolutionary computing

Artificial and machine intelligence?


Hyperoptimization

Gradient-based Hyperparameter Optimization through Reversible Learning


Probabilistic programming

See links here

Memetic algorithm

Evolutionary computing

Simulated annealing ! http://www.mit.edu/~dbertsim/papers/Optimization/Simulated%20annealing.pdf

https://en.wikipedia.org/wiki/Inferential_programming

Order notation

guillefix 1st July 2016 at 2:13am

See notes for defs, Asymptotic approximation

Big O: f=O(g)f=O(g) as ϵ0\epsilon \rightarrow 0

(f could be asymptotic to const*g, or much smaller)

Small o: f=o(g)f=o(g)

f is strictly much less than g

Strict order: f=ord(g)f=\text{ord}(g)

f is strictly of order g, i.e. asymptotic to some constant times g.

Also: Big theta notation, and Big omega notation.

Ordinal pattern

guillefix 5th July 2016 at 9:28pm

Ordinal analysis

The study of Permutation complexity, which we call ordinal analysis, can be envisioned as a new kind of symbolic dynamics whose basic blocks are ordinal patterns.

Ordinary differential equations

guillefix 23rd January 2016 at 12:01am

Organic chemistry

guillefix 7th July 2016 at 11:54pm

Organizational studies

guillefix 8th April 2016 at 5:56pm

Origami

guillefix 31st May 2016 at 12:01am

Origin of bias in GP maps

guillefix 21st July 2016 at 3:13pm

Osmiophoresis

guillefix 3rd June 2016 at 12:27am

Osmiophoresis of a spherical shell which is permeable to solvent but impermeable to product particles refers to its development of a nonzero velocity due to osmotic forces that cause radial flows of solvent across the membrane.

Movement of a semipermeable vesicle through an osmotic gradient

Osmosis

guillefix 2nd July 2016 at 6:38pm

Osmosis is the spontaneous net movement of solvent molecules through a semi-permeable membrane into a region of higher solute concentration, in the direction that tends to equalize the solute concentrations on the two sides. https://en.wikipedia.org/wiki/Osmosis

It is often described by a "solvent potential", which is lowered by the addition of solute, and raised by increases in hydrostatic pressure. Thus, the solvent tends to flow from regions of lower to higher solute concentration, and this tendency can be countered by a sufficiently large pressure difference. However, the physical mechanisms that cause this are tricky. See description of mechanisms here: Physical mechanisms of osmosis

See also Osmotic forces for more general related effects, caused by interactions of the solute with the boundary

Osmotic pressure is defined as the external pressure required to be applied so that there is no net movement of solvent across the membrane. Osmotic pressure is a colligative property, meaning that the osmotic pressure depends on the molar concentration of the solute but not on its identity.

See Fluid mechanics, Thermodynamics. See Microhydrodynamics for other possible osmotic effects, which can also cause pressure gradients.

See also Biophysics

In Reverse osmosis, the process is reversed by applying a pressure greater than the osmotic pressure. This has applications to desalinization, for instance.

The theory of the reverse osmosis separation of solutions using fine-porous membranes

http://physics.stackexchange.com/questions/212183/physic-explanation-to-osmosis?rq=1

Capillary osmosis through porous partitions and properties of boundary layers of solutions

Negative osmosis

Molecular Understanding of Osmosis in Semipermeable Membranes

Forward osmosis: Principles, applications, and recent developments

Osmotic forces

guillefix 2nd July 2016 at 6:33am

A particular kind of Interfacial force

Manipulation of Colloids by Osmotic Forces

OxAI

guillefix 10th March 2016 at 11:52pm

Oxford Artificial Intelligence Society

http://oxai.org

In the making...

See AI meetup too

Website

Code for animated background: http://codepen.io/MarcoGuglielmelli/pen/lLCxy

Oxford

guillefix 21st June 2016 at 3:32pm

I am currently living in Oxford, and thus a big part of my activities are related to it.

https://en.wikipedia.org/wiki/The_Headington_Shark

Nexus mail

Managing finances

Oxford 3D printing Society

guillefix 27th June 2016 at 10:51pm

OxTET

guillefix 28th February 2016 at 11:44pm

Oxford Transhumanism and Emerging Technologies Society

http://oxtet.org

PageRank

guillefix 26th April 2016 at 9:23pm

See Measures and metrics for networks

There is one potentially undesirable feature of Katz centrality: an important vertex pointing to many vertices makes all those vertices important. It seems more natural for the centrality gained by virtue of receiving an edge from a prestigious vertex to be diluted by being shared with so many others (think of a web directory like Google or Yahoo! pointing to my page: my page is not that central, because it's just one of millions). We can achieve this by dividing the centrality derived from each neighbour by that neighbour's out-degree:

xi=αjAijkjoutxj+βx_i = \alpha \sum_j \frac{A_{ij}}{k^{\text{out}}_j}x_j +\beta

or, in matrix form:

x=αAD1x+β1\mathbf{x}=\alpha \mathbf{A}\mathbf{D}^{-1}\mathbf{x}+\beta \mathbf{1}

where indeterminate values 0/00/0 are defined to be 00. This can be rearranged to solve for x\mathbf{x}, and conventionally one sets β=1\beta=1. The result is known as PageRank, the trade name given by Google, which uses this measure in its ranking algorithm.

Just like with Katz centrality, α\alpha has to be fixed, and it must be less than the reciprocal of the maximum eigenvalue of AD1\mathbf{A}\mathbf{D}^{-1}: if it equals this value the centralities blow up, and above it the answer turns out to be meaningless. The maximum eigenvalue (at least for an undirected network) is 11 (as can be shown using the Perron-Frobenius theorem; see the footnote on page 177 of Newman's book, and Meyer's Matrix analysis and applied linear algebra book. The theorem is very useful for stochastic processes on networks in general).

Google uses α=0.85\alpha=0.85

One can see that this measure is mathematically the same as the steady state of a random walk on the network, with an added probability, related to the ratio of β\beta and α\alpha, of "teleporting" to another part of the network. This ensures that one doesn't get stuck in nodes without out-degree in the case of directed networks, and that one doesn't simply recover the degree centrality for undirected networks.
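A minimal power-iteration sketch of the fixed point above (Python; the graph and function name are made up for illustration):

```python
import numpy as np

def pagerank(A, alpha=0.85, beta=1.0, iters=200):
    """Iterate x <- alpha * A D^{-1} x + beta * 1, with 0/0 := 0 for
    dangling nodes; A[i, j] = 1 for an edge j -> i. Returns x normalized
    to sum to 1."""
    A = np.asarray(A, dtype=float)
    kout = A.sum(axis=0)                     # out-degrees (column sums)
    dinv = np.divide(1.0, kout, out=np.zeros_like(kout), where=kout > 0)
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = alpha * A @ (dinv * x) + beta
    return x / x.sum()

# Edges 0 -> 2, 1 -> 2, 2 -> 0: node 2 receives the most links.
A = np.zeros((3, 3))
A[2, 0] = A[2, 1] = A[0, 2] = 1
r = pagerank(A)
```

The iteration converges because the spectral radius of αAD⁻¹ is at most α < 1.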

Paleontology

guillefix 21st May 2016 at 9:21pm

Paleontology is the scientific study of life existent prior to, and sometimes including, the start of the Holocene Epoch, roughly 11,700 years before present.

Paper

guillefix 1st July 2016 at 11:19pm

Papers on active matter

guillefix 17th June 2016 at 6:21pm

Parsing

guillefix 29th June 2016 at 2:29am

Partial differential equations

guillefix 23rd January 2016 at 12:01am

Partial ordering

guillefix 14th July 2016 at 1:29am

A partial ordering on a Set XX is a (binary) Relation \preceq on XX that is:

  • reflexive: for all xX,xxx \in X, x \preceq x.
  • antisymmetric: for all x,yXx, y \in X, if xyx \preceq y, and yxy \preceq x, then x=yx=y.
  • transitive: for all x,y,zXx, y, z \in X, if xyx \preceq y and yzy \preceq z, then xz x \preceq z.

A set with a partial ordering is called a Partially ordered set (or poset).

A Pre-order is a weaker kind of relation

For many common examples, the Partial ordering \preceq is often interpreted as \leq (or less than or equal).
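These three axioms are easy to check mechanically for small examples; a Python sketch, using divisibility on {1, …, 8} as the ordering (a classic example of a partial order that is not total):

```python
from itertools import product

def is_partial_order(X, rel):
    """Check reflexivity, antisymmetry and transitivity of rel on the set X."""
    X = list(X)
    reflexive = all(rel(x, x) for x in X)
    antisymmetric = all(x == y or not (rel(x, y) and rel(y, x))
                        for x, y in product(X, X))
    transitive = all(rel(x, z) or not (rel(x, y) and rel(y, z))
                     for x, y, z in product(X, X, X))
    return reflexive and antisymmetric and transitive

X = range(1, 9)
divides = lambda a, b: b % a == 0               # a "precedes" b iff a divides b
print(is_partial_order(X, divides))             # True: divisibility is a poset
print(is_partial_order(X, lambda a, b: a < b))  # False: strict < is not reflexive
```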

Partially ordered set

guillefix 14th July 2016 at 1:01am

A partially ordered set (or poset) is a Set with a Partial ordering

Particle physics

guillefix 22nd January 2016 at 7:27pm

Pastes

guillefix 4th May 2016 at 8:43pm

pastes: materials that can be deformed easily (like liquids), but keep their shape once the applied force is removed (like solids).

Two common qualitative characteristics of the structure may be distinguished: disorder, as in most of these materials no specific arrangement can be distinguished, which explains their ability to be deformed at will without losing their mechanical properties; and crowding as the elements making up these materials interact significantly with their neighbours, which explains the solid behaviour of these systems as long as the applied forces are not too large, and from that point of view we are dealing with jammed systems.

Pastes typically consist of a suspension of small particles in a background fluid. These particles are crowded, or jammed together like grains of sand on a beach, forming a disordered, glassy or amorphous structure, and giving pastes their solid-like character.

Rheology of Soft Glassy Materials

Condensed matter: Memories of paste The authors make a remarkable observation: although the sample was completely fluidized by the large shear stress, it developed a 'memory' of the direction in which the stress was applied, and the solid-like paste slowly 'pulled back' on itself in the opposite direction, eventually passing beyond its initial position.

Path (Graph theory)

guillefix 24th January 2016 at 2:28pm

A path (sometimes called a 'walk') in a network is a sequence of nodes such that every pair of consecutive nodes in the sequence is connected by an edge in the network. In directed networks an edge must be traversed in the direction of the edge; in undirected networks, in either direction.

Self-avoiding paths (a.k.a. 'simple paths') don't traverse the same node or edge twice.

The length of a path is the number of times an edge is traversed along it (i.e., the number of edges, counted with repetition).

AikAkjA_{ik}A_{kj} is only non-zero if there's a path of length 2 from i to j. The total number of such paths is N(2)ij=k=1nAikAkj=[A2]ijN(2)_{ij}=\sum_{k=1}^{n}A_{ik}A_{kj}=[A^2]_{ij}. Similarly, the total number of 3-paths is N(3)ij=k,l=1nAikAklAlj=[A3]ijN(3)_{ij}=\sum_{k,l=1}^{n}A_{ik}A_{kl}A_{lj}=[A^3]_{ij}. In general N(r)ij=[Ar]ijN(r)_{ij}=[A^r]_{ij}.

Cycles are paths that start and end at the same vertex. The number of cycles of length rr is Tr[Ar]=iκir\text{Tr}[A^r]=\sum_i \kappa_i^r, where κi\kappa_i is the iith eigenvalue of AA. This follows from the eigendecomposition when the matrix is diagonalizable (equivalently, when the nilpotent part of its Jordan form is zero, i.e. there are no 1s above the diagonal); otherwise one can prove it using the Schur decomposition.

A simple cycle is a self-avoiding cycle.
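A quick numerical check of these formulas on the triangle graph (Python):

```python
import numpy as np

# Triangle graph: three nodes, each pair connected.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

A3 = np.linalg.matrix_power(A, 3)
assert A3[0, 1] == 3                  # N(3)_{01}: walks of length 3 from 0 to 1
assert np.trace(A3) == 6              # closed walks of length 3
kappa = np.linalg.eigvalsh(A)         # eigenvalues: 2, -1, -1
assert np.isclose(np.trace(A3), np.sum(kappa**3))   # Tr[A^r] = sum_i kappa_i^r
```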

Geodesic path

Shortest path between two points, defining the geodesic distance. They are always self-avoiding because any loop could be removed to make the path shorter. By convention we sometimes assign a distance of \infty to unconnected nodes. They are not necessarily unique.

The diameter of a graph is the longest geodesic distance between any pair of connected nodes.

Eulerian and Hamiltonian paths

An Eulerian path is one that traverses each edge in a network exactly once. It is not self-avoiding in general because a node with a degree higher than two will need to be visited more than once.

A necessary condition for a graph to have an Eulerian path is that there are zero or two nodes with odd degree, the first case corresponding to beginning and ending the path on the same node, and the second case, beginning and ending on different nodes.

A Hamiltonian path is one that visits each node exactly once. It is self-avoiding because traversing an edge more than once will imply traversing a node more than once.

The general problem of finding Eulerian or Hamiltonian paths in a graph, or proving their non-existence, is hard and still actively researched. The degree condition above was used by Euler to solve the famous Königsberg bridge problem in 1736.

These paths have applications in computer science in: job-sequencing, "garbage collection", and parallel programming.

Path integrals for stochastic processes

guillefix 26th January 2016 at 7:03pm

The Markov property of most stochastic processes means that one can naturally construct a Path integral description. This can be used to draw parallels between stochastic processes and quantum mechanics.

From Langevin equation to path integrals

We begin with the general Langevin equation with no inertial term, but with a deterministic force. We also assume the noise term is Gaussian white noise.

Ito/Stratonovitch dilemma and Multiplicative noise


See also

State-dependent diffusion: Thermodynamic consistency and its path integral formulation

Pathology

guillefix 5th July 2016 at 3:08am

https://en.wikipedia.org/wiki/Pathology

The study of abnormal (and often restricted to detrimental) function of a biological organism.

In medicine, a physiologic state is one arising from normal body function; a pathologic one, by contrast, reflects the abnormalities that occur in diseases of animals, including humans.

Pearson coefficients

guillefix 13th February 2016 at 1:29pm

See section 7.12.2 of Newman book.

It is the number of common neighbours minus the expected number of common neighbours if edges were random (kikj/nk_i k_j /n), normalized in a certain way. It is also the covariance between two rows of adjacency matrix divided by the product of their standard deviations:

pearson_coefficient_network.png

guillefix 13th February 2016 at 1:28pm

People in deep learning

guillefix 9th July 2016 at 4:22am

Percolation

guillefix 16th June 2016 at 12:18am

Generally, percolation refers to qualitative changes in connectivity in systems (especially large ones) as their components are added or removed. In particular, percolation most often refers to the case where a system goes from being "mostly disconnected" to "mostly connected", in some sense. A more general mathematical model inspired by percolation and the Potts model is the Random-cluster model.

Percolation theory, from the perspective of Network theory describes the behavior of connected clusters in a network (often modelled as a random graph), as some substructures in the network are added or removed. The most common types are random site and bond percolation, where one removes either nodes or edges with a uniform probability, known as the occupation probability. However, there are other types (see below).

Again, from the perspective of networks, the transition from the system being "disconnected" to "connected", is most often made precise by the appearance of a giant connected component. See below.

Often, the theory of percolation is concerned with the clustering properties of identical objects which are randomly and uniformly distributed through space with a given occupation probability. However, these uniformity assumptions may be relaxed in other types of percolation.

Keywords: Network science, Complex systems.

from here

References

Newman's book, and Mason and Gleeson tutorial have good reviews. See more at References for percolation

Percolation theory

Mathematical theory of percolation, with several important results, and discoveries.

Percolation phase transition

Percolation phase transition

A phase transition occurs between a phase without a giant connected component and a phase with one. A giant connected component, or GCC, is a connected component that contains a finite fraction of the nodes as the network size NN \rightarrow \infty, i.e. it has an "extensive" scaling. The transition occurs at a critical value of the occupation probability, known as the percolation threshold.

Critical phenomena in percolation

Types of percolation models

Main types:

Applications of percolation models

Applications to porous materials

Applications to the study of landscapes

Applications in topography (study of landscapes) has been found, in particular relating to:

  • Statistical and fractal properties of watersheds.
  • Percolation of water bodies as water level rises in a landscape.

Percolation on Bethe lattices

guillefix 11th June 2016 at 6:39pm

The Bethe lattice is defined as an infinite graph in which each node is connected to z neighbours (the coordination number) and no closed loops exist.

They are related to Cayley trees.

Several results exist for Percolation on these lattices. For instance, their Percolation threshold is pc=1/(z1)p_c = 1/(z-1), for any z3z \geq 3.

See also the chapter on this on the phase transitions book by Sole.

Percolation on hypercubic lattices

guillefix 11th June 2016 at 6:30pm

Percolation on hypercubic lattices, which can be represented as Zd\mathbb{Z}^d, where dd is the dimension of the lattice, and Z\mathbb{Z} is the set of integers of course.

Some mathematical results exist for Percolation thresholds, and Continuity of percolation phase transition. In particular, it is known that for percolation on Zd\mathbb{Z}^d the percolation threshold at dimension dd is greater than or equal to that at dimension d+1d+1.

Percolation on random graphs and networks

guillefix 11th June 2016 at 6:52pm

See Percolation theory, Random graph

See the chapter of the book.

If we let uu be the probability that a randomly chosen vertex in the graph does not belong to the giant component, then

See this chapter for random graphs with general degree distributions, and this chapter for percolations

Percolation phase transition

guillefix 11th June 2016 at 1:33am

Giant component and phase transition

Percolation is the simplest fundamental model in statistical mechanics that exhibits phase transitions signaled by the emergence of a giant connected component (or GCC, it is a connected component that contains a finite fraction of the nodes as the network size NN \rightarrow \infty, i.e. it has an "extensive" scaling, in the language of Statistical physics). The parameter that controls the existence of a GCC is the occupation probability, pp (or the "attach probability" q=1pq=1-p), the critical value at which the transition happens is called the percolation threshold

In particular, the transition is often a continuous (2nd-order) transition with a critical point. Behaviour at this point is thus an example of Critical phenomena, and at this point the system is self-similar (see Fractals); as a consequence, many quantities follow Power laws. See section 12.2, and exercise 12.12, as well as exercise 2.13 of this book.

Percolation threshold

For random site percolation on a configuration model graph:

pc=1g1(1)=kk2kp_c = \frac{1}{g'_1(1)} = \frac{\langle k \rangle}{\langle k^2 \rangle - \langle k \rangle}

where g1(z)g_1(z) is the generating function of the excess degree distribution. See Newman's book.
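As a quick sanity check of this formula (a Python sketch): for a configuration model with Poisson degree distribution with mean c (an Erdős–Rényi-like network), ⟨k²⟩ = c + c², so the formula gives p_c = 1/c:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 4.0
k = rng.poisson(c, size=200_000).astype(float)   # sampled degree sequence
pc = k.mean() / (np.mean(k**2) - k.mean())       # <k> / (<k^2> - <k>)
# pc should be close to the exact value 1/c = 0.25 for Poisson degrees.
```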

Note that even if there is a GCC, its size may be small, so a full understanding of the network's resilience should include the dependence of the size of the GCC with pp.

Critical phenomena in percolation

Percolation theory

guillefix 12th June 2016 at 2:04am

The mathematical theory of Percolation.

Basic concepts

Cluster: a connected component of the occupied subgraph (the graph obtained after removing edges in the percolation process).

Probability that there exists an infinite cluster.

Probability that there exists a giant cluster (or giant component, or giant connected component (GCC)), defined as a cluster with size (number of nodes) or order O(N)O(N), as NN \rightarrow \infty (NN is the size of the whole network).

A related, but different quantity is the probability that a node belongs to a giant cluster, PP_\infty. Often it's easier to work with uu, the probability that a node is not connected to the GCC.

Another property of interest is the Distribution of sizes for the small clusters in percolation models. A related quantity is the mean cluster size.

The two-point correlation function, gc(r)g_c (r) is defined as the probability that if one point is in a finite cluster then another point a distance rr away is in the same cluster. This function typically has an exponential decay gc(r)er/ξg_c (r) \sim e^{-r/\xi}, rr \rightarrow \infty. ξ\xi is then the correlation length, or connectedness length. Note that the correlation length can also be defined in some other ways that measure the characteristic size of clusters, in particular one can use the radius of gyration to define it.

See here

Percolation on hypercubic lattices

Percolation on Bethe lattices

A model which is particularly tractable analytically.

Percolation on random graphs and networks

Percolation thresholds

There are some exact results for some models, in 2D for the square, triangular, honeycomb and related lattices, but not for many others, like site percolation on the square and honeycomb lattices, and bond percolation on the kagomé lattice.

Continuity of percolation phase transition

Continuum limit of percolation models

The continuum limit at the critical point is often a Conformal field theory, as percolation models at the critical point are found to have conformal symmetry.

A relatively new method to describe the continuum limit of the critical lattice models is Schramm–Loewner evolution

Relations between percolation models and Potts models

Infinite clusters

There are some results on the number of possible infinite clusters which can coexist

Percolation threshold

guillefix 12th June 2016 at 6:43pm

See Percolation theory

The values of percolation thresholds are not universal and generally depend on the structure of the lattice and dimensionality, and are believed to achieve their mean-field values only in the limit of infinite dimension (Some Cluster Size and Percolation Problems). Finding rigorous proofs of exact thresholds and bounds has also been an enduring area of research for mathematicians (The critical probability of bond percolation on the square lattice equals 1/2, A bond percolation critical probability determination based on the star-triangle transformation, Percolation - Grimmett).

Exact thresholds (for bond percolation) in 2D for the square, triangular, honeycomb and related lattices were found using the star-triangle transformation (Some Exact Critical Percolation Probabilities for Bond and Site Problems in Two Dimensions). It has been shown in Exact bond percolation thresholds in two dimensions that thresholds can be found for any lattice that can be represented as a self-dual 3-hypergraph (that is, decomposed into triangles that form a self-dual arrangement). It is also shown in [G.R. Grimmett, I. Manolescu, Probab. Theory Related Fields] that thresholds can be found for any lattice that can be represented geometrically as an isoradial graph, yielding a broad new class of exact thresholds and providing a proof (The critical manifolds of inhomogeneous bond percolation on bow-tie and checkerboard lattices) of Wu's 1979 conjecture (Critical point of planar Potts models) for the threshold of the checkerboard lattice.

However, the exact value of thresholds for many systems of long interest (such as site percolation on the square and honeycomb lattices, and bond percolation on the kagomé lattice) are still missing (Recent advances and open challenges in percolation).

There exist also bounds on the percolation thresholds for infinite connected graph with maximum finite vertex degree. See Grimmett's book.

The percolation threshold for bond percolation is less than or equal to that of site percolation.

periodic table.png

Permutation complexity

guillefix 7th July 2016 at 7:24pm

See Descriptional complexity

See Permutation complexity in dynamical systems

Permutation entropy was introduced in 2002 by C. Bandt and B. Pompe as a measure of complexity in time series. In a nutshell, permutation entropy replaces the probabilities of length-L symbol blocks in the definition of the Shannon entropy by the probabilities of length-L Ordinal patterns.

Permutation Complexity Related to the Letter Doubling Map

Perturbation methods

guillefix 2nd May 2016 at 2:53pm

See SimpleMind mindmap and notes and problem sets in LectureNotes

Notes from tablet

See also lectures on YB from Bender (at PI)

Perturbation methods explore the existence of a small or large parameter to derive systematically a precise approximation. More art than science, building experience is valuable.

There are two methods for obtaining precise approximations: numerical methods and analytical (asymptotic) methods. These are not in competition but complement each other. Perturbation methods work when some parameter is large or small; numerical methods work best when all parameters are order one. Agreement between the two methods is reassuring when doing research. Perturbation methods often give more physical insight.

Course materials, notes

See reading list there

Mathematical foundation: Asymptotic approximation

Applications

Perturbation methods for algebraic equations

Asymptotic approximation of integrals

Perturbation methods for differential equations

Local analysis

Local analysis of differential equations

(as discussed in Bender's book Part 2. Is this the same as regular perturbation methods, as discussed in Hinch's book? I think so).

Global analysis

Mostly for problems with regions of very different speed of change. These are singular perturbation problems, often arising when the small parameter ϵ\epsilon multiplies the highest derivative. Then the ϵ=0\epsilon=0 problem is of lower order, and will in general not be able to satisfy all the boundary conditions of the original problem.

Matched asymptotic expansions

Method of multiple scales

WKB method

I wonder if there are analogues of these methods for algebraic equations. Maybe through the Perturbation methods for difference equations, which are closer to algebraic equations.

Perturbation methods for difference equations

These are described in Bender's book


Laplace's method


Books:

Hinch

Bender and Orzsag

..

Perturbation methods for algebraic equations

guillefix 7th June 2016 at 2:10am

Iterative method

Faster if the expansion sequence is unknown (i.e. we don't know if it's a power series or a log series, for instance); slower if the expansion sequence is known.

xn+1=g(xn;ϵ)x_{n+1} = g(x_n; \epsilon)
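A toy sketch (my own example, not from the notes): for x2+ϵx1=0x^2 + \epsilon x - 1 = 0, one can rearrange to x=g(x;ϵ)=1ϵxx = g(x;\epsilon) = \sqrt{1 - \epsilon x} and iterate from the ϵ=0\epsilon=0 root x=1x=1.

```python
import math

def iterate_root(eps, x0=1.0, iters=30):
    """Fixed-point iteration x_{n+1} = g(x_n; eps) for the root of
    x^2 + eps*x - 1 = 0 near x = 1, rearranged as x = sqrt(1 - eps*x)."""
    x = x0
    for _ in range(iters):
        x = math.sqrt(1 - eps * x)
    return x
```

Each iteration picks up one more order in ϵ\epsilon, so the converged value matches the exact root.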

Expansion method

Pose (guess) expansion. For instance a power series in small parameter, ϵ\epsilon:

x=1+ϵx1+ϵ2x2+...x=1+\epsilon x_1 + \epsilon^2 x_2 + ... as ϵ0\epsilon \rightarrow 0

and substitute in algebraic equation, and equate terms of equal order because asymptotic expansions (using a fixed set of functions of ϵ\epsilon) are unique.

Easier than the iterative method, especially when working to higher orders, but one must assume the form of the expansion.
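A concrete check (my own toy example): for x2+ϵx1=0x^2 + \epsilon x - 1 = 0, posing x=1+ϵx1+ϵ2x2x = 1 + \epsilon x_1 + \epsilon^2 x_2 and equating powers of ϵ\epsilon gives x1=1/2x_1 = -1/2 and x2=1/8x_2 = 1/8.

```python
import math

def exact_root(eps):
    """Positive root of x^2 + eps*x - 1 = 0."""
    return (-eps + math.sqrt(eps**2 + 4)) / 2

def two_term_expansion(eps):
    """x = 1 + eps*x1 + eps^2*x2 with x1 = -1/2, x2 = 1/8,
    found by substituting the ansatz and equating powers of eps."""
    return 1 - eps / 2 + eps**2 / 8
```

The error of the truncated expansion is higher order than the last retained term, as an asymptotic expansion should be.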

Singular perturbations

When the limit problem (ϵ=0\epsilon =0) differs in an important way from the limit ϵ0\epsilon \rightarrow 0. Main method:

Regularization method: Scale variables so that the problem becomes regular.

Non-integral powers

When power expansion fails (one of the coefficients seems to need to be \infty..), an expansion in non-integral powers may be necessary.

This happens, for example, when the roots of the limit problem (ϵ=0\epsilon =0) form a double root. As the example in the notes suggests, we could have guessed that an order ϵ1/2\epsilon^{1/2} change in xx would be required to produce an order ϵ\epsilon change in a function at its minimum: perturbing the parabola by order ϵ\epsilon moves the root by the same amount as perturbing xx enough to produce an order ϵ\epsilon change in the original parabola, and Taylor expanding at the minimum of the parabola shows that a larger, order ϵ1/2\epsilon^{1/2}, change in xx is needed to get the ϵ\epsilon change in the function.
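The simplest instance of this (a hypothetical example of mine, not the one from the notes): perturbing a parabola with a double root, (x1)2=ϵ(x-1)^2 = \epsilon, splits the root by an amount of order ϵ1/2\epsilon^{1/2}, not ϵ\epsilon.

```python
import math

def perturbed_double_root(eps):
    """Roots of (x - 1)^2 = eps: the eps = 0 double root at x = 1
    splits by O(sqrt(eps)), not O(eps)."""
    return 1 - math.sqrt(eps), 1 + math.sqrt(eps)
```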

Finding the right expansion sequence

We first pose the general expansion:

x=1+δ1x1x=1+\delta_1 x_1,   δ1(ϵ)1\delta_1(\epsilon) \ll 1

Substitute into the algebraic equation, and look for dominant balances in the result. This will involve looking for the largest terms with and without δ1(ϵ)\delta_1(\epsilon)

Once we have the first term, we add a term to the expansion:

x=1+δ1x1+δ2x2x=1+\delta_1 x_1+\delta_2 x_2,   δ2(ϵ)δ1\delta_2(\epsilon) \ll \delta_1

And we repeat this process

Again, the iterative method is very useful when the expansion sequence is not known, and can be faster than the above method involving unknown expansion functions, δ\delta.

Logarithms

Normally appears in transcendental equations.

Use iterative method as expansion method is hard to guess.

In his example, "over this range the term xx is slowly varying while exe^{-x} is rapidly varying. This suggests rewriting the equation as ex=ϵxe^{-x} = \frac{\epsilon}{x}." I think this is so that we control/determine the faster-changing term.
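A sketch of that iteration (assuming the equation is xex=ϵx e^{-x} = \epsilon; rearranging as x=ln(x/ϵ)x = \ln(x/\epsilon) and updating the slowly varying side from the previous iterate):

```python
import math

def large_root(eps, iters=20):
    """Large root of x*exp(-x) = eps via the iteration x = ln(x/eps),
    starting from the leading-order guess x = ln(1/eps)."""
    x = math.log(1 / eps)
    for _ in range(iters):
        x = math.log(x / eps)
    return x
```

The iteration is a contraction for small ϵ\epsilon (the map has derivative 1/x11/x \ll 1 near the root), and each step adds roughly one more iterated logarithm to the expansion.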

Pervaporation

guillefix 2nd July 2016 at 5:45pm

Phase transition

guillefix 1st May 2016 at 8:08pm

Qualitative picture

...See sec 2.2.2 on Soft Condensed Matter by Richard Jones, and also beginning chapter of Principles of condensed matter physics

  • Gas (kinetic energy dominates)
  • Liquid (kinetic and potential energy comparable)
  • Solid (potential energy dominates)

As we increase temperature, the average energy per particle, UU, increases (see Equilibrium statistical physics). Because the potential between molecules is generally bounded above (for example the attractive part can have the form of 1/r-1/r or er-e^{-r} for large rr, so that the maximum potential energy is 0), as we increase UU, we soon reach a point where we must increase the kinetic energy, as the potential energy becomes saturated (i.e. the molecules have dissociated, or we have broken the bond). Therefore, as we increase temperature, we find that we go to phases where the kinetic energy is more and more dominant, often from solid to liquid to gas, though, for low enough pressure, the liquid phase is skipped.

Phase diagrams, 2D projections of surface in a 3D space of temperature, pressure, and volume.

Critical point: point at which gas-liquid transition changes from being continuous to discontinuous.

Triple point: Point of coexistence between three phases.

Order parameters and phase fields

Order parameter: quantity that distinguishes different phases, often associated with some kind of "order", and is often 00 in disordered phase. There are two main types:

These equations describe the evolution of phase fields: the fields of the {space-time varying order parameter}. They thus belong to the so-called phase-field method used in Materials science, for example.

Mixture theory or the theory of interacting continua also uses the above equations for describing multi-phase systems. See ON THE DEVELOPMENT AND GENERALIZATIONS OF ALLEN-CAHN AND STEFAN EQUATIONS WITHIN A THERMODYNAMIC FRAMEWORK

See also Soft matter physics notes. Though, I would like to see a more rigorous derivation of these equations, based on non-equilibrium thermodynamics. The derivations are rigorous; they just use Constitutive equations that are mostly just assumed, instead of derived!

Landau theory of phase transitions

Describing phase transitions in terms of a free energy, which is a function of the order parameter, and depends on parameters (such as temperature). As one varies the parameters, the free-energy minima change location, and appear/disappear at phase transitions.
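A minimal numerical sketch (my own toy example, with f(m)=tm2+m4f(m) = t m^2 + m^4 standing in for the Landau free energy): as the reduced temperature tt changes sign, the minimum moves continuously from m=0m=0 to m=±t/2m = \pm\sqrt{-t/2}.

```python
def landau_minimum(t):
    """Order parameter minimizing the Landau free energy f(m) = t*m**2 + m**4,
    located by brute force on a grid (analytically: m = 0 for t > 0,
    m = +/- sqrt(-t/2) for t < 0)."""
    grid = [i / 1000 for i in range(-2000, 2001)]
    return min(grid, key=lambda m: t * m * m + m**4)
```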

Ginzburg-Landau theory in Statistical field theory: write down the most general free energy that is consistent with known symmetries of the order parameter. Assume it can be written as a power series and stop when additional terms don't change the behaviour of interest.

Symmetries. Symmetry breaking. Correlation functions, etc. Critical exponents (describe behavior of thermodynamic functions near critical point).

Order of transition

  • First order: order parameter changes discontinuously. 1st derivative of free energy discontinuous.
  • Second order: order parameter changes continuously. 2nd derivative of free energy discontinuous. Like the liquid-gas transition at the critical point.

Critical exponents and universality

phase_diagram.PNG

Philosophical works

guillefix 21st January 2016 at 9:03pm

Philosophy

guillefix 8th July 2016 at 3:01am

"One does not simply define Philosophy" ~ Me

Philosophy is this. It is the study of the Cosmos, from the anthropocentric perspective, of our Knowledge of the Cosmos. This is, in a fundamental sense, all we have, since the physical, objective perspective, of the Cosmos ultimately derives from our Knowledge of it.

From this perspective, Cosmos gets equalled to our Knowledge of it. Philosophy concerns itself with the study of the Cosmos from this perspective, which basically by definition, encompasses everything else here, everything one ever thinks, and is conscious of.

I call this the observer perspective, and I think it's the most fundamental.

This is in contrast to the god perspective often taken in Science, where we imagine an objective reality separate from our minds (described in Cosmography and Cosmology). This perspective has been so useful and fruitful, I consider this physical objective world to be true also, even if our only access to it is by our limited senses and mental models of it (as asserted by the observer perspective).

Most often I work with the god perspective, as in Science. However, when dealing with complex philosophical questions, I have to switch to the more fundamental observer perspective.

See also Metaphysics for another description of the above, as a view of the nature of Existence.

Portal:Contents/Philosophy and thinking

Stanford encyclopedia of philosophy


My Metaphysics: Observer/god perspective, or better name may be Mind/Physical reality.

My Epistemology: Principle of Inclusiveness

My Ethics: Utilitarianism up to the point you can. Then virtue/Emotion/Aesthetics. In particular, see discussion in Emotion.

My Logic. Don't know enough, but I think Mathematical logic may be the best description.


My Politics (ideas only): A weak dynamic social democracy, combined with a robust weighted direct democracy and a cyber-government.


One of the central ideas of this philosophy that combines observer perspective with the existence of a physical world, is the division of everything into whether information flows from observer to physical world, or vice versa. The former, I call Art; the latter, I call Science. All other sections are effectively described by these two aspects of the observer condition, which I holistically call the Conversation with Nature.

Here is a very interesting alternative to my conceptual framework: Krebs cycle of creativity


Inclusiveness Principle to arrive at Truth.

Philosophy of computer science

guillefix 21st June 2016 at 3:32pm

Philosophy of mathematics

guillefix 23rd January 2016 at 12:10am

Philosophy of mind

guillefix 21st June 2016 at 3:30pm

Philosophy of quantum mechanics

guillefix 2nd June 2016 at 1:29am

Philosophy of science

guillefix 21st June 2016 at 3:31pm

Duhem-Quine thesis: You can't test a hypothesis in isolation, but always in conjunction with ancillary assumptions.

David Wallace's homepage

Karl Popper

Beyond Descartes and Newton: Recovering life and humanity

Proposed classification

Phoretic mechanisms of colloids

guillefix 2nd July 2016 at 6:53pm

A phoretic mechanism of colloids is any mechanism/effect that causes colloidal particles to move in a way that is partially deterministic (unlike Diffusion), due to the gradient of some physical quantity (this seems to be the working definition, judging from what I've read). These may also be called transport mechanisms.

These are important in Active matter, in Biophysics, and Nanotechnology. In particular phoretic effects can make a colloidal particle self-propelling.

A large class of mechanisms for colloid transport is due to interfacial forces, arising from non-trivial Microhydrodynamics, Chemical reactions, or other effects.

See Colloid Transport by Interfacial Forces. See also the more recent Manipulation of Colloids by Osmotic Forces

Generic theory of colloidal transport

Thermal non-equilibrium transport in colloids

Phoretic mechanisms

Diffusiophoresis

Osmiophoresis

Electrophoresis

Thermophoresis

Phoretic mechanisms of self-propelled colloids

Phoretic mechanisms of self-propelled colloids

guillefix 9th June 2016 at 7:36pm

Phoretic mechanisms for active colloids.

Phoretic mechanisms for living organisms (for instance living active colloids like Cells) are called Taxis. See Chemotaxis for a prominent example. Actually, chemotaxis is often applied to the phoretic mechanisms of active colloids (when they originate from a gradient in a chemical concentration). More specifically, chemotaxis may be used to refer to attraction to higher chemical concentration, while anti-chemotaxis refers to repulsion from it.

Mechanisms

  • "Chemotaxis" (in the sense of directional alignment with chemical gradient). The fluid flows set up around the particle can turn its axis of orientation to align parallel or antiparallel to the local gradient; this process has active contributions arising from the chemical reaction as well as passive ones.
  • Polar run-and-tumble. The enzymatic rate depends nonlinearly on the local concentration of the substrate with a characteristic Michaelis- Menten form inherited from the underlying catalytic kinetics of the reactions [ 38 ]. The combination of enhanced activity at high concentrations and randomized orientation acts to effectively populate the colloids in “slow” regions [ 39 ].
  • Apolar run-and-tumble. An active colloid can also chemotax by a net motion of its center along a gradient in a noise-averaged sense. I think this is basically polar diffusiophoresis? I.e. the particle is repelled from (or attracted to) gradients mostly along its axial direction (due to higher asymmetry in motility).
  • Phoretic response (referring to the standard phoretic response, also present in non-active colloids). The colloid moves along an external chemical gradient by diffusiophoresis.

Theory of phoretic mechanisms of self-propelled colloids

Photography

guillefix 21st January 2016 at 8:59pm

Photosynthesis

guillefix 8th July 2016 at 6:35pm

Photosynthesis: Crash Course Biology #8

Light dependent reactions

Basically Cellular respiration in reverse.

Water + Carbon dioxide + sunlight

Light independent reactions

Calvin cycle

Physarum machines and physarum solver

guillefix 24th June 2016 at 1:24am

physarum_machines_books.png

guillefix 27th March 2016 at 3:32am

Physical chemistry

guillefix 3rd June 2016 at 12:33am

Physical geography

guillefix 8th April 2016 at 5:33pm

Physical mechanisms of osmosis

guillefix 2nd July 2016 at 8:28pm

Physical mechanisms of Osmosis

Macroscopic/thermodynamic description

Based on Chemical potentials, Solution (Chemistry)

The solution-diffusion model: a review

Microscopic mechanism

MECHANISM OF OSMOTIC FLOW IN POROUS MEMBRANES

The standard chemical potential explanation still holds as part of the mechanism. See here. The energy comes from the expansion of the solute (which works like an ideal gas), just like in quasistatic adiabatic expansion.

However, the boundary layer given by a Diffusio-osmotic effect enhances the chemical potential difference at the pore, increasing the osmotic pressure. The extra work done in the process, I think, ultimately comes from the fact that the potential energy near the wall is lowered as the solute concentration decreases during the process.

When the membrane is semi-permeable (as in Osmosis proper), then I think that the main effect would be an excluded volume effect (this appears to be indeed the case, at least for purely semi-permeable membranes; see Negative osmosis), giving rise to an effective repulsive potential, like that in Nelson's Biological physics book, or those that appear in Diffusio-osmosis or Diffusiophoresis.

See Interfacial forces

Molecular mechanisms of osmosis Mechanism of osmosis

OSMOSIS: A MACROSCOPIC PHENOMENON, A MICROSCOPIC VIEW

Osmosis is not driven by water dilution, here too. See Nelson's Biological physics book for more details

The mechanism is based on the wall repelling the solute molecules. See analysis here.

Alternative mechanisms: Osmosis, colligative properties, entropy, free energy and the chemical potential Osmosis and thermodynamics explained by solute blocking http://www.circle4.com/biophysics/chapters/BioPhysCh05.pdf

Brownian motion, hydrodynamics, and the osmotic pressure

Molecular Understanding of Osmosis in Semipermeable Membranes

See also Negative osmosis for more resources.

Osmotic pressure or decompression?

Physics

guillefix 28th June 2016 at 4:11pm

Physics (from Ancient Greek: φυσική (ἐπιστήμη) phusikḗ (epistḗmē) "knowledge of nature", from φύσις phúsis "nature") is the natural science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. (wiki).

The Mechanical Universe

Nice map of physics: http://scimaps.org/maps/map/being_a_map_of_physi_171/detail

System of measurement

Physical review letters

International Centre for Theoretical Sciences

http://physics.info/

hyperphysics, etc.


Some physics books: http://www.fisica.net/ebooks/

See DB\Cosmos, etc....

Should organize this. See SimpleMind mindmap. See also Bulk matter.

https://journals.aps.org/prx/issues/6/2

Physiology

guillefix 8th July 2016 at 7:03pm

Studies the function of organisms. Goes together with Anatomy, which studies the structure of organisms. One often restricts physiology to refer to the "normal" functioning of organisms, in contrast with Pathology

https://en.wikipedia.org/wiki/Physiology

See Human physiology

see vids here

Piezodialysis

guillefix 2nd July 2016 at 3:53am

Places

guillefix 17th May 2016 at 1:14am

Places

Portal:Contents/Geography and places

Vivekananda Rock & Valluvar Statue, southernmost peak of India, where two seas and an ocean meet

Planar network

guillefix 31st January 2016 at 11:24pm

A planar network (or graph) is one that can be drawn on a plane without having any edges cross.

For these graphs we can define the Dual Graph, with vertices being faces (regions completely enclosed by edges), and edges being among faces that share an edge of the original graph. This new graph is also planar

Dual graphs were used to prove the four-color theorem by Appel and Haken, which translated to graphs is stated in terms of the chromatic number, the number of colors required to color the vertices of a graph in such a way that no two vertices connected by an edge have the same color.

Kuratowski's theorem...
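A quick necessary condition that follows from Euler's formula (a toy check, not a full planarity test, which needs e.g. a Kuratowski-subgraph search or a library routine like networkx's check_planarity):

```python
def violates_planarity_bound(n_vertices, n_edges):
    """Necessary condition from Euler's formula: a simple planar graph
    with n >= 3 vertices has at most 3n - 6 edges. Returns True when
    the edge count alone already rules out planarity."""
    return n_vertices >= 3 and n_edges > 3 * n_vertices - 6
```

For example, K5 (5 vertices, 10 edges) violates the bound, while K4 (4 vertices, 6 edges) passes it and is indeed planar.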

As of yet, there is no popular measure of the degree of planarity (i.e. of how planar a graph is).

Planet Earth

guillefix 5th July 2016 at 3:31am

🌐🌎🌍🌏∞🌌

Movements of Earth

Planetary science

guillefix 17th July 2016 at 9:53pm

Planetary system

guillefix 5th July 2016 at 3:31am

A planetary system is a set of gravitationally bound non-stellar objects in orbit around a star or star system

Plant

guillefix 8th July 2016 at 5:52pm

Plant biology (botany)

Plant cell

Evolution of plants

Evolved more than 500 million years ago, as Lycophytes. These plants were so numerous that they have resulted in many coal beds from this period, now called the Carboniferous.

Plant cell

guillefix 8th July 2016 at 5:49pm

Plant Cells: Crash Course Biology #6

See Plant

They have a cell wall made of polysaccharides cellulose, hemicellulose and pectin; sometimes also lignin or cutin

Plant sciences

guillefix 8th April 2016 at 8:26pm

Plasma physics

guillefix 2nd May 2016 at 12:12am

Plastic

guillefix 11th May 2016 at 12:33pm

Plastic is a generic term used in the case of polymeric material that may contain other substances to improve performance and/or reduce costs.

Note 1: The use of this term instead of polymer is a source of confusion and thus is not recommended.

Note 2: This term is used in polymer engineering for materials often compounded that can be processed by flow.

https://en.wikipedia.org/wiki/Plastic

Play

guillefix 17th May 2016 at 1:29am

Point-set topology

guillefix 29th May 2016 at 12:34am

It is synonymous with general topology (not with differential topology)

https://www.youtube.com/watch?v=1LwkljjLBns

Political science

guillefix 8th April 2016 at 6:15pm

Politics

guillefix 25th June 2016 at 3:37am

...


Some potentially good ideas for political systems

Social Futurism

A weak dynamic social democracy, combined with a robust weighted direct democracy and a cyber-government.

From each according to his wants, to each according to his wants, from the machines whatever else is needed. https://www.youtube.com/watch?v=F5uqZGA06vE Transhumanist declaration 1998 http://ieet.org/index.php/IEET/more/twyman20140416 http://wavism.net/ Bioneering Sociocyberneering http://www.thevenusproject.com/ http://www.thezeitgeistmovement.com/ Polymatharchy http://en.wikipedia.org/wiki/Idea_of_Progress

A more in depth summary:

1. weak (not many powers given to the "admin bods" as Russell Brand calls them (https://www.youtube.com/watch?v=3YR4CseY9pk&t=4m56s). Also see: https://www.youtube.com/watch?v=gy0R56sZ0ts to understand this better.)

2. dynamic democracy (citizens can vote to change leaders at any time, Details: http://www.ted.com/.../a_dynamic_democracy_where_lea.html)

3. social (ok, this is a broad term. I will give it here the definition of focusing on social causes, that is on the betterment of all people's lives. Include ideas like the declaration of human rights, basic income and income taxes here.)

4. strong direct democracy (refers to most decisions being taken by all the citizens by some voting/discussion scheme, probably striving for >50% majority. A note here is that less restrictions would generally be put on corrective decisions, rather than initiative ones, because of the biggest imperative of avoiding harm than of enhancing some quality. This idea is called corrective democracy: http://www.fee.org/the.../detail/can-we-correct-democracy...)

5. weighted (means that people who have certain qualifications or have acquired certain merits have a bigger say in issues)

6. cyber-government (this refers to both the technologies used to implement a lot of the above, and to the general idea of creating an advanced nervous system for society (see https://www.youtube.com/watch?v=5zn8MRKOskw&t=78m18s for example), from which everyone can get informed and inform others, and which can on itself help on arriving at decisions (by different possible kinds of AI))

–experimental politics would also be a thing, as more people start viewing politics as the "social tech" it is. Freedom will thus be enhanced by voluntarism in things like startup cities: http://startupcities.org/hacking-law-and-governance-with.../ In short, my view is that technology allows governance to really be put in the hands of the citizens, but this must be done in an intelligent and supervised way.

Polymer

guillefix 11th May 2016 at 2:12pm

A polymer is a molecule composed of a small molecular unit repeating in a chain; usually >100>100 units. The chain may have complicated topology, like branches, or cross-links. Links can also be made between different polymers (of different chemical composition for instance). These all determine the polymer architecture.

Polymer physics

Polymer chemistry

Example of polymer: https://en.wikipedia.org/wiki/Polystyrene

Examples of polymers

Polymer architecture

Main architectures:

More specific examples of architectures:

Interestingly, when one closes a linear-chain polymer into a loop, the viscosity drops dramatically.

Polymer physics

guillefix 11th May 2016 at 1:59pm

Polymer physics deals with the physical properties of Polymers. A polymer is a molecule composed of a small molecular unit repeating in a chain; usually >100>100 units

Polymer statics

Isolated polymer molecule in solution

The ideal chain

Model polymer chain like random walk.
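A quick numerical check of this picture (my own toy lattice-walk sketch): for an ideal chain the mean squared end-to-end distance should scale as <R^2> = N b^2, with N the number of segments and b the step length.

```python
import math
import random

def end_to_end(n_steps, rng):
    """End-to-end distance of an n-step freely jointed chain on a cubic
    lattice (each monomer step is a random unit step along an axis)."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    x = y = z = 0
    for _ in range(n_steps):
        dx, dy, dz = rng.choice(steps)
        x, y, z = x + dx, y + dy, z + dz
    return math.sqrt(x * x + y * y + z * z)
```

Averaging R^2 over many walks of N = 100 unit steps gives a value close to 100, as the random-walk model predicts.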

Can include effect of short-range interactions

A variant is the Gaussian chain

Distribution of segments in the polymer chain

Non-ideal chains

Concentrated solutions and melts

Thermodynamic properties

Flory-Huggins theory

Chemical potential and osmotic pressure

Phase separation

Polymer gels

Polymer dynamics

Molecular motion of polymers in dilute solution

Rouse theory

Zimm theory

Molecular motion in entangled polymer systems

Rheology of polymers


Books and resources

http://cbp.tnw.utwente.nl/PolymeerDictaat/

Introduction to Polymer Physics - M. Doi

The Theory of Polymer Dynamics - M. Doi & S.F. Edwards


People

S.F. Edwards

P.G. de Gennes

Doi


Viscoelastic fluids

Reptation

cross-linking rubbers

Polymorphic limit (Wright-Fisher model)

guillefix 12th April 2016 at 2:18pm

(See Arrival of the frequent for context)

If NLμ1NL\mu \gg 1, the population naturally spreads over different genotypes, a regime called the polymorphic limit. See Polymorphic limit (Wright-Fisher model) tiddler for more.

To model neutral exploration, we let 1+sp=δpq1+s_p = \delta_{pq}, where δpq \delta_{pq} is a Kronecker delta, so that only qq has some fitness, and all other phenotypes have 00 fitness, and so, even if a mutation produces them, no offspring can inherit from them. At every generation, all offspring inherit from Nq\mathcal{N}_q only, and thus the population can only spread by mutations over a single generation jump, and it is most likely to stay mostly within Nq\mathcal{N}_q, if NN is large enough.

We should note that equations, like Eq.3 would be the same, even though we assumed that all the individuals are in Nq\mathcal{N}_q, because, as 1+sp=δpq1+s_p = \delta_{pq}, all the selection weight is in Nq\mathcal{N}_q, which produces the same results. More precisely, in the expression i=1NΦp~(gi,si)N(1+si)j=1N(1+sj)\sum_{i=1}^N \tilde{\Phi_p}(g_i, s_i) \frac{N(1+s_i)}{\sum_{j=1}^N (1+s_j)} only NN' (the number of individuals in Nq\mathcal{N}_q) elements are non-00 in the sum and so in the mean-field approx (where we assume Φp~(gi,si)\tilde{\Phi_p}(g_i, s_i) is constant) the NN' from the sum cancels the j=1N(1+sj)=j=1Nδpq=N\sum_{j=1}^N (1+s_j) = \sum_{j=1}^N \delta_{pq} = N' from the denominator, leaving a NN on the top.

In the mean-field approximation, the expected number of individuals with phenotype pp produced per generation is now independent of time, and given by Eq. 3 (we thus simply write mp(t)=mpm_p(t) = m_p), under the corresponding assumptions, because even if not all of the population is in Nq\mathcal{N}_q, the fitness assumption we've made gives selective weight only to those in Nq\mathcal{N}_q (see Wright-Fisher model).

As we said above, the number of individuals with genotype pp (p-type) will follow a binomial distribution, with probability mp/Nm_p/N of success (getting p-type offspring), and number of trials NN, and therefore the probability to get at least one such individual is:

P(\text{at least one p-type offspring}) = 1 - P(\text{no p-type offspring}) = 1 - (1 - m_p/N)^N \approx 1 - e^{-m_p}

After TT generations, we have run the Bernoulli trial TNTN times, and thus the number of p-type individuals we have obtained, summed over all TT generations, also follows a Binomial distribution, but with NTNT samples and the same probability. Thus

P(\text{at least one p-type offspring over } T \text{ generations}) \approx 1 - e^{-m_p T}

Thus, the time TT at which the probability of having discovered a p-type individual (i.e. produced a p-type offspring) equals α\alpha is found by solving:

α=1empT\alpha = 1 - e^{-m_p T}

empT=1αe^{-m_p T} = 1- \alpha

mpT=ln(1α)-m_p T = \ln{(1- \alpha)}

T=ln(1α)NLμΦpq T = \frac{-\ln{(1- \alpha)}}{N L\mu \Phi_{pq}}

Eq. 4

Where we used Eq. 3 in Arrival of the frequent.
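A direct numerical sanity check of Eq. 4 (the parameter values below are made up for illustration): plugging the derived TT back into 1empT1 - e^{-m_p T} should return exactly α\alpha.

```python
import math

def expected_p_offspring(N, L, mu, phi_pq):
    """m_p of Eq. 3: expected number of p-type offspring per generation."""
    return N * L * mu * phi_pq

def discovery_time(alpha, m_p):
    """Eq. 4: generations until a p-type offspring has appeared with
    probability alpha, from solving alpha = 1 - exp(-m_p * T) for T."""
    return -math.log(1 - alpha) / m_p
```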

Population genetics

guillefix 7th May 2016 at 5:59pm

Mathematical population genetics

See Evolution

See Wright-Fisher model, Arrival of the frequent, Monomorphic limit (Wright-Fisher model), Polymorphic limit (Wright-Fisher model).

Second Bangalore School on Population Genetics and Evolution

School and Discussion Meeting on Population Genetics and Evolution (video lectures)

Some terms: gene, genotype, allele, (gene) locus, haploid, diploid, homozygote, heterozygote, heterozygosity, monoecious, dioecious, polymorphism, linkage, recombination.

https://en.wikipedia.org/wiki/Haplodiploidy

Fixation time

Intuition

Coalescent


Computational biology - An evolutionary approach

https://en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution

Some mathematical models from population genetics course

Mathematical Population Genetics lecture notes

Theoretical evolutionary genetics - Felsenstein (book), pdf

Probability Models for DNA Sequence Evolution

Population Genetics V: Neutral Theory

Wright-Fisher model with some stuff on the coalescent

Random Genetic Drift & Gene Fixation

Some mathematical models from population genetics book

Moran model

Genetic Drift and Effective Population Size

Heterozygosity and the Wright-Fisher model (stackexchange)

Quantitative genomics (MIT) ppt

STOCHASTIC MODELS FOR GENETIC EVOLUTION

Diffusion Process Models in Mathematical Genetics

Short course on statistical population genetics

ON THE PROBABILITY OF FIXATION OF MUTANT GENES IN A POPULATION

THE AVERAGE NUMBER OF GENERATIONS UNTIL FIXATION OF A MUTANT GENE IN A FINITE POPULATION

Notes on population genetics and evolution: “Cheat sheet” for review

Intuitive explanation of fixation time

See Probability theory and Stochastic processes

Sampling with and without replacement

Porous material

guillefix 9th May 2016 at 8:37pm

https://en.wikipedia.org/wiki/Porous_medium

A porous medium or a porous material is a material containing pores (voids). The skeletal portion of the material is often called the "matrix" or "frame"

Transport processes in fractals—I. Conductivity and permeability of a leibniz packing in the lubrication limit

Material porosity and permeability

Porous materials

A porous material most often refers to porous solids, i.e. porous materials where the matrix is a solid.

Porous solids

If the porosity of a porous solid is high enough, it also falls under the category of Foams, and many of these are very flexible materials.


Solid-gas Dispersed media do form materials with pores, but they are different from porous solids in that the location of these pores can change as the material is strained or disturbed in some way.

Granular material

Fibrous material

Post-cyberpunk

guillefix 4th February 2016 at 9:46pm

Power laws

guillefix 24th June 2016 at 12:19am

See also Scale-free networks

Scanned Notes on Power Laws

A power law distribution for kk has the form:

pk=Ckαp_k=Ck^{-\alpha}

where α\alpha is the exponent and CC is a normalization constant.
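A quick sketch (my own, for the continuous power law with lower cutoff x_min): sample by inverting the CDF, then recover the exponent with the maximum-likelihood estimator from Power-law distributions in empirical data, α̂ = 1 + n / Σ ln(x_i / x_min).

```python
import math
import random

def sample_powerlaw(alpha, xmin, n, rng):
    """Inverse-CDF samples from p(x) ~ x**(-alpha) for x >= xmin (continuous)."""
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def mle_exponent(xs, xmin):
    """Maximum-likelihood (Hill-type) estimate of the exponent alpha:
    alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)
```

Fitting power laws by least squares on a log-log histogram is notoriously biased; the MLE above is the standard alternative.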

Normalization of power laws

Moments of power laws

Lorenz curves for power law distributions

Zipf, Power-laws, and Pareto - a ranking tutorial

http://www.necsi.edu/guide/concepts/powerlaw.html

Similarity of Symbol Frequency Distributions with Heavy Tails


Top-heavy distributions

Power laws often mean that rare events are more likely than one might have thought, because the tail "dies off" more slowly than in, say, exponentially decaying distributions such as Gaussians.

More on power laws

Power Law Distributions, 1/f Noise, Long-Memory Time Series

Power-law distributions in empirical data

Similarity of Symbol Frequency Distributions with Heavy Tails

Power set

guillefix 14th July 2016 at 2:23am

The set of all Subsets of a Set

Power spectral density

guillefix 10th May 2016 at 7:16pm

Power transmission

guillefix 1st July 2016 at 6:55pm

power_laws.png

guillefix 28th January 2016 at 1:41am

Power-law distributions in empirical data

guillefix 23rd May 2016 at 11:03pm

Power-line communication

guillefix 1st July 2016 at 6:53pm

https://en.wikipedia.org/wiki/Power-line_communication

Main current applications in narrow-band networking:

Pre-order

guillefix 14th July 2016 at 12:58am

A pre-order on a Set XX is a (binary) Relation on XX, that is reflexive and transitive.
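For a finite set, the two axioms can be checked directly (a toy sketch, representing the relation as a set of ordered pairs):

```python
def is_preorder(X, rel):
    """Check that a relation (set of ordered pairs on X) is reflexive
    and transitive, i.e. a pre-order."""
    reflexive = all((x, x) in rel for x in X)
    transitive = all((a, c) in rel
                     for (a, b) in rel for (b2, c) in rel if b == b2)
    return reflexive and transitive
```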

Prefix code

guillefix 4th July 2016 at 11:54pm

aka prefix-free, or instantaneous code

A string is a prefix of another string if the second begins with the first, i.e. their first nn symbols coincide, where nn is the length of the first string.

A prefix code is a Variable-length code where no codeword is a prefix of another codeword.

(IC 2.5) Prefix codes

(IC 2.6) Prefix codes - remarks and what's next

Any prefix code is uniquely decodable

A prefix code can be represented as a search tree, which is a nice way to think about prefix codes.

The above definition may be called left-prefix. There is also the notion of right-prefix. See here

Example to see why prefix codes are faster (in the sense of computational complexity) to decode than other uniquely decodable codes. Prefix codes are decodable in linear time
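A minimal sketch of the linear-time decoder (my own toy version; a real decoder would walk the code tree, but the dictionary lookup below works for the same reason: prefix-freeness makes the first match unambiguous, so no backtracking is needed):

```python
def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    cws = list(codewords)
    return not any(a != b and b.startswith(a) for a in cws for b in cws)

def decode_prefix(code, bits):
    """Linear-time decoding of a prefix code {symbol: codeword}: grow a
    buffer bit by bit and emit a symbol as soon as the buffer matches a
    codeword (valid only because the code is prefix-free)."""
    lookup = {cw: sym for sym, cw in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in lookup:
            out.append(lookup[buf])
            buf = ""
    if buf:
        raise ValueError("trailing bits do not form a codeword")
    return out
```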

Preimage

guillefix 7th July 2016 at 6:34pm

Printing press

guillefix 1st July 2016 at 11:23pm

Nuremberg, 15th century

Probabilistic dynamical system

guillefix 7th July 2016 at 8:08pm

Measure-theoretical dynamical system where the measure is a Probability measure

I invented this term, not sure if it already exists.

Probability measure

guillefix 7th July 2016 at 6:15pm

A Measure PP on a set Ω\Omega s.t. P(Ω)=1P(\Omega)=1

Probability space

guillefix 7th July 2016 at 6:23pm

https://en.wikipedia.org/wiki/Probability_space

  1. A Measurable space
    1. A Set, called the Sample space, \Omega, which is the set of all possible outcomes.
    2. A Sigma-algebra, referred to as the set of events \mathcal{F}, where each event is a set containing zero or more outcomes.
  2. A Probability measure, which corresponds to the assignment of probabilities to the events; that is, a function P from events to probabilities.

Probability theory

guillefix 17th July 2016 at 11:26pm

Probability Theory Wiki article. Mathematica foundations of probability

Probability, Mathematical Statistics, Stochastic Processes

Probability space

Based on Measure theory

Probability Primer

Probability space

Random variable


Basic results in probability theory


Probability distribution function

Cumulative distribution function

Moments and cumulants

Generating functions

Central Limit Theorem


Combinatorics

Probabilistic method (see book)

Procedural generation

guillefix 5th July 2016 at 1:17pm

Procedural graphics

guillefix 15th July 2016 at 9:37pm

Process

guillefix 8th July 2016 at 3:15am

A change in the properties of something through time.

See also Activity

Dynamical system, Category theory.

Product topology

guillefix 14th July 2016 at 2:45pm

The product topology on a Cartesian product of Topological spaces (X_i, \tau_i), i \in I, where I is a finite index set, is the topology generated by all sets of the form O_1 \times O_2 \times ... \times O_n, where O_i \subset X_i is \tau_i-open. This definition is not correct when I is infinite, and the definition using cylinder sets below must be used. The definitions differ because the basis is constructed from finite intersections of the open cylinders, which all have the form (\times_{i \in J} O_i) \times (\times_{i \in I \setminus J} X_i), where J is a finite subset of I. Hence some elements corresponding to infinite Cartesian products of the form \times_{i \in I} O_i can't be realized from finite intersections of open cylinders. This comes about, for example, in infinite Sequence spaces.

It can also be constructed using Filter subbases and Filter bases (that generate the open sets of the topology)

Note the elements U(j, O) forming the subbase are part of the final topology. They have the form O_1 \times O_2 \times ... \times O_n described above if we remember that the full set X_i is always open.

The sets forming the subbase are known as open cylinders, while those forming the basis are known as Cylinder sets.

Another equivalent way of defining the product topology is as the 'smallest' topology such that the projection functions \pi_j: \times_i X_i \rightarrow X_j, f \mapsto f(j), are Continuous functions.

A smaller subbase is given by the Cylinder sets

Programmable matter

guillefix 1st July 2016 at 2:04am

Programming

guillefix 1st July 2016 at 2:07am

Programming language

guillefix 30th June 2016 at 1:04am

Programming

Programming language paradigms

Programming paradigms

Imperative: give instructions to change the state of the program

Declarative: just write statements (assertions) of what things are, or what functions they perform. The program can then take inputs and give outputs by passing the inputs through the various nested functions (Functional programming).

Visual programming languages Nice example: https://vvvv.org/


Most programming languages are context-free. http://stackoverflow.com/questions/898489/what-programming-languages-are-context-free. See Theory of computation

Programming languages

C/C++

Python

JavaScript

Assembly (programming language)


Other languages

Go, Lisp, Clojure,

https://www.rust-lang.org/

Esoteric programming languages

Projects, ideas, action

guillefix 9th April 2016 at 1:08pm

Projects, ideas, action, is about new ideas, the brink of the known, the edge of the philosophical Cosmos.

Interdisciplinary, antidisciplinary, etc. New emergent ideas. Things that don't fit

Also: thinking what to do, and doing.

Lives of important/influential people: http://fundersandfounders.com/

http://www.iftf.org/home/


Facebook, twitter news feed...

Prokaryotes

guillefix 22nd April 2016 at 11:59pm

Bacteria

One of the most studied model organisms is Escherichia coli

Gram staining is a method of staining used to differentiate bacterial species into two large groups (gram-positive and gram-negative), by detecting peptidoglycan, which is present in a thick layer in gram-positive bacteria

Actinobacteria is a phylum of Gram-positive bacteria with high guanine and cytosine content in their DNA

Streptomyces is the largest genus of Actinobacteria

Streptomyces hygroscopicus produces Sirolimus, also known as rapamycin which is an inhibitor of the Kinase enzyme Mechanistic target of rapamycin

Proof theory

guillefix 29th March 2016 at 3:10pm

Property

guillefix 8th July 2016 at 3:15am

An Entity has a certain property if it belongs to the Set that {the Concept corresponding to that property} represents.

Protein

guillefix 11th May 2016 at 1:11pm

Proteins are large biomolecules, or macromolecules, consisting of one or more long chains of amino acid residues.

Protein engineering

guillefix 3rd March 2016 at 11:49pm

Educational portal of the awesome Protein databank : http://pdb101.rcsb.org/

Psychology

guillefix 5th July 2016 at 3:56am

Public health

guillefix 8th May 2016 at 10:16pm

https://en.wikipedia.org/wiki/Public_health

"the science and art of preventing disease, prolonging life and promoting health through organized efforts and informed choices of society, organizations, public and private, communities and individuals."

Python (programming language)

guillefix 13th July 2016 at 3:39pm

Quantum condensed matter physics

guillefix 18th May 2016 at 5:42pm

Oxford, Fabian Essler C6 physics notes

Advances in Graphene, Majorana fermions, Quantum computation

New questions in quantum field theory from condensed matter theory

Second quantization

Ideal Fermi gas

Weakly interacting bose gas

From Hamiltonian can derive Gross-Pitaevskii equation

http://www.nii.ac.jp/qis/first-quantum/forStudents/lecture/pdf/qis385/QIS385_chap4.pdf

Bogoliubov approximation

Alternative using density matrix..

Spin waves in ferromagnets

Electrons in solids

Quantum liquids

Superconductors, superfluids

Trapped ultra-cold gases

Quantum field theory

guillefix 29th May 2016 at 12:25am

LectureNotes

and link Gingkoapp tree here too.

https://www.maths.tcd.ie/~fionn/

Quantum information theory

guillefix 19th July 2016 at 4:57pm

Quantum liquid

guillefix 18th May 2016 at 5:42pm

Strictly speaking, a quantum liquid is a spatially homogeneous system of strongly interacting particles at temperatures sufficiently low that the effects of quantum statistics are important.

In practice the term is used more broadly, to include those aspects of the behavior of conduction electrons in metals and degenerate semiconductors which are not sensitive to the periodic nature of the ionic potential.

See also Quantum fluid and Quantum spin liquid

Quantum mechanics

guillefix 11th June 2016 at 1:50pm

Quantum statistical physics

guillefix 18th May 2016 at 5:44pm

Based on the density matrix. Naturally extends the classical formalism of Statistical physics

R language

guillefix 15th July 2016 at 9:42pm

A Programming language for Statistics

https://www.r-project.org/

Good IDE: RStudio

R programs on the web!: Shiny

See lynda.com lectures

Random automata

guillefix 29th June 2016 at 7:07pm

Random Boolean network

guillefix 24th June 2016 at 12:28am

Random Boolean networks: Analogy with percolation (Stauffer)

guillefix 15th June 2016 at 5:30pm

See Dynamical Instability in Boolean Networks as a percolation Problem, Boolean network

Random Boolean networks: Analogy with percolation

Lattice sites can be divided into two groups: sites susceptible to damage, and sites stable against damage. If the initially flipped centre spin belongs to an infinite connected network of sites susceptible to damage, then the initially small damage will spread over the whole system.

A scaling theory for the Kauffman model, analogous to that for percolation, is presented in the Appendix.

From simulations it is observed that moving sites, i.e. those not having local period one, cluster together into groups of connected neighbours. These clusters are ramified, similar to those of percolation theory. Indeed, for p below p_c one only has clusters of finite periods, whereas for p above p_c we find, besides these finite clusters of finite periods, one infinite cluster of infinite period in addition.

In another set of simulations, the ratio of final to initial damage is interpreted by Derrida and Stauffer (1986) as a susceptibility, similar to the ratio of magnetization to magnetic field in ferromagnets. Indeed, simulations indicate that this quantity diverges if p approaches pc from below. The long-time limit of the damage for infinitesimal initial damages follows a typical second-order phase transition curve.

of lattice sites; thus perhaps the nearest-neighbour square lattice is not the most realistic model of these biological aspects.

Random deterministic automata

guillefix 5th July 2016 at 5:23pm

Random automata, Deterministic finite automaton

Enumeration and Generation of Initially Connected Deterministic Finite Automata implemented in python FAdo library.

Initially connected means that, for each state q there exists a directed path from the distinguished start state to q. I think another name for an automaton, or a state of one, with this property, is accessible.

Enumeration and random generation of accessible automata

Stirling numbers of the second kind

Random Deterministic Automata

Using Analytic combinatorics

Functional graph (see article), corresponding to a total map from [n] (the set \{1,2,3,...,n\}) to itself, consists of components, each a cycle of trees (a forest whose roots are connected by a cycle). Note that the nodes in the trees have edges pointing toward the root. This combinatorial structure emerges from the constraint that the out-degree is exactly 1 for all nodes in the functional graph.

As an example of applying the symbolic method and singularity analysis from analytic combinatorics, they find the asymptotic value of the average number of cyclic points (nodes belonging to a cycle), which is \sqrt{\pi n/2}, n being the number of points.

See definitions of transition structure, automaton, accessible automaton, etc in article.

One can also show that the expected number of points with in-degree 0 (garden-of-eden points) is, asymptotically, e^{-2} n. One can also show that with high probability a transition structure is not accessible.

We look at the set of n-node transition structures whose nodes have in-degree at least 1, except possibly the initial state (call this set T_n'). This set has asymptotically the same cardinality as the set of accessible transition structures, up to a multiplicative constant. It is easy to show that there is a bijection between this set and the set of all surjections from [kn+1] to [n]. The number of these is, asymptotically,

S(nk+1, n) \sim \alpha_k \beta_k^n n^{kn+1}

where \alpha_k > 0 and \beta_k \in (0,1) are computable constants. Note that because \beta_k < 1, this number is much smaller than n^{kn+1}, the total number of transition structures. This agrees with the previous argument that accessible structures are sparse. Also note that \alpha_k \beta_k^n is the probability that a random map from [kn+1] to [n] is a surjection. Good showed this (see ref in article).
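A quick numerical sketch of the surjection count via the standard inclusion-exclusion formula (the function name and sample parameters here are mine, not from the article); the surjective fraction of maps [kn+1] \to [n] indeed decays geometrically in n, as \alpha_k \beta_k^n suggests.

```python
from math import comb

def surjections(m, n):
    """Number of surjections from an m-set onto an n-set, by inclusion-exclusion:
    sum_j (-1)^j C(n,j) (n-j)^m."""
    return sum((-1) ** j * comb(n, j) * (n - j) ** m for j in range(n + 1))

# Fraction of maps [kn+1] -> [n] that are surjective, for alphabet size k = 2:
k = 2
for n in (5, 10, 20):
    m = k * n + 1
    print(n, surjections(m, n) / n ** m)   # decays roughly like alpha_k * beta_k**n
```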

See more remarks in article.

A more relevant question may be the number of isomorphism classes of accessible automata; symmetries (just like in Feynman diagrams) make such counting difficult in general. However, for accessible automata the counting is simplified, due to a certain bijection, and the number of elements per isomorphism class is n!.

More References on random deterministic automata

On the Probability of Being Synchronizable

An algorithm for road coloring

Graph structure of random automata

Diameter and Stationary Distribution of Random r-out Digraphs

The graph structure of a deterministic automaton chosen at random slides

The size of the largest strongly connected component of a random digraph with a given degree sequence

What about the giant out-component? They don't talk about it !?

Random graph

guillefix 1st July 2016 at 5:23am

Graphs with probabilistic properties

Erdős–Rényi model

The most common random graph model is the Erdős–Rényi model. Random connections among a given set of nodes.

See the chapter of the book.

Configuration model

http://tuvalu.santafe.edu/~aaronc/courses/5352/fall2013/csci5352_2013_L11.pdf

Random graph with given degree distribution

See this chapter

.... See Newman's book on Networks

Probability on Graphs

Random Graphs, Geometry and Asymptotic Structure

https://www.youtube.com/watch?v=pylTEAyUQiM

THE PHASE TRANSITION IN INHOMOGENEOUS RANDOM GRAPHS

Random Graphs and Complex Networks. Vol. II

Random graphs with general degree distributions

guillefix 26th February 2016 at 12:30am

Configuration model

Sample calculations

Average number of edges between two nodes is

\frac{k_i k_j}{2m-1} \approx \frac{k_i k_j}{2m}

in the limit of large size. This is approximately equal to the probability of an edge between the two nodes in the limit of large size too.

Excess degree distribution

Generating functions for the small components

See derivation in problem sheet or notes or book, using generating functions (in particular it's "power" property where the g.f. of a sum of independent random variables is the product of g.f.s of these rand. vars.)

Giant component

Can find expression for size of giant component.

One can then derive the condition for existence of a giant component in the configuration model. It is called the Molloy-Reed condition:

\langle k^2 \rangle - 2\langle k \rangle > 0 \Leftrightarrow \exists \text{ GCC}

GCC = giant connected component.
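A minimal sketch of the criterion as code (the helper name and the example degree distributions are hypothetical): compute the first two moments of the degree distribution and test \langle k^2 \rangle - 2\langle k \rangle > 0.

```python
def has_giant_component(degree_probs):
    """Molloy-Reed criterion for the configuration model:
    a giant component exists iff <k^2> - 2<k> > 0."""
    mean_k = sum(k * p for k, p in degree_probs.items())
    mean_k2 = sum(k ** 2 * p for k, p in degree_probs.items())
    return mean_k2 - 2 * mean_k > 0

# All nodes of degree 1 (a perfect matching): no giant component.
print(has_giant_component({1: 1.0}))                    # False
# Mix with enough degree-3 nodes: <k^2> - 2<k> = 5.1 - 4.2 > 0.
print(has_giant_component({1: 0.3, 2: 0.3, 3: 0.4}))    # True
```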

Random graphs with clustering

Degree-triangle model

Variant, that has tunable clustering coefficient.

See here and here

Random map

guillefix 17th July 2016 at 11:26pm

aka random map model, or random mapping

For each point in phase space, one chooses at random another point in phase space as being its successor in time, i.e. we have a random map, T, from a finite Set of M points to itself. It can be shown to be a limiting case of a Kauffman Random Boolean network, with in-degree K \rightarrow \infty.

To each attractor (labelled by s), we assign a weight W_s corresponding to the fraction of points in its basin of attraction.

See also Analytic combinatorics

Statistics of attractors

Probability distribution of size of basin of attraction

Joint probability distribution of two attractor weights

Probability distribution of Y=\sum\limits_s W_s^2

Probability that a random map of M points is indecomposable (i.e. the map has a single attractor)

Q_M = \frac{(M-1)!}{M^M} \sum\limits_{n=0}^{M-1} \frac{M^n}{n!} \sim \sqrt{\frac{\pi}{2M}}

where \sim holds for large M
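A quick numerical check of the exact formula against its asymptotic (the function name is mine; Python's exact big-integer arithmetic makes the factorials unproblematic):

```python
from math import factorial, pi, sqrt

def q_indecomposable(M):
    """Exact probability that a uniform random map on M points is
    indecomposable: Q_M = ((M-1)!/M^M) * sum_{n=0}^{M-1} M^n/n!."""
    return factorial(M - 1) / M ** M * sum(M ** n / factorial(n) for n in range(M))

# Compare the exact value with the asymptotic sqrt(pi/(2M)):
for M in (10, 50, 200):
    print(M, q_indecomposable(M), sqrt(pi / (2 * M)))
```

For M = 2 one can check by hand: of the 4 maps on two points, only the identity-free-of-connections one (1→1, 2→2) has two attractors, so Q_2 = 3/4, which the formula reproduces.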

Probability that the map is indecomposable and the attractor is of period l:

Q_M(l) = \frac{M!}{(M-l)!\,M^{l+1}}

Probability distribution of number of attractors

See Probability Distributions Related to Random Mappings , A Property of Randomness of an Arithmetical Functions

The average number of attractors is \langle A \rangle = \frac{1}{2} \log{M} + O(1)

Probabilities related to a point chosen at random

Probability that a randomly chosen point falls into an attractor of weight W and period l

Probability that a randomly chosen point ends up on an attractor of period l:

P(l) = \sum\limits_{n \geq l} \frac{\Gamma(M)}{\Gamma(M-n+1)} \frac{1}{M^n}

For large M, this gives

P(l) = \frac{1}{\sqrt{M}} \int_x^\infty e^{-y^2/2} dy

where l = \sqrt{M} x. This gives the average \langle l \rangle = \sqrt{M} \sqrt{\frac{\pi}{8}}, and variance \langle l^2 \rangle - \langle l \rangle^2 = M \left( \frac{2}{3} - \frac{\pi}{8} \right)
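A Monte Carlo sketch of the last result (all names and parameters here are mine): sample uniform random maps, iterate from a random starting point until the trajectory repeats, and compare the mean attractor period to \sqrt{\pi M/8}.

```python
import random
from math import pi, sqrt

def attractor_period(f, x):
    """Period of the cycle eventually reached from x under iteration of f,
    found by recording first-visit times until a point repeats."""
    seen = {}
    t = 0
    while x not in seen:
        seen[x] = t
        x = f[x]
        t += 1
    return t - seen[x]

random.seed(0)
M, trials = 400, 2000
total = 0
for _ in range(trials):
    f = [random.randrange(M) for _ in range(M)]   # uniform random map on M points
    total += attractor_period(f, random.randrange(M))
print(total / trials, sqrt(pi * M / 8))   # the two should be comparable
```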

Random mappings with constraints, and other extensions

In Probability Distributions Related to Random Mappings, some of the above results are extended to the case without fixed points, T(i) \neq i, and to the case where the function is one-to-one


The random map model: a disordered model with deterministic dynamics

Probability Distributions Related to Random Mappings

A Property of Randomness of an Arithmetical Functions

The Expected Number of Components Under a Random Mapping Function

Probability of Indecomposability of a Random Mapping Function

Probability Distributions Related to Random Transformations of a Finite Set

Weighted Random Mappings; Properties and Applications.

Some remarks about computer studies of dynamical systems

Random-Energy Model: Limit of a Family of Disordered Models

Random-energy model: An exactly solvable model of disordered systems

Random mappings

Random allocations

Random Forests

Random matrix product

guillefix 3rd July 2016 at 6:52pm

Random matrix theory

guillefix 13th July 2016 at 3:53pm

Random variable

guillefix 2nd July 2016 at 3:10pm

Random walk in a graph

guillefix 11th February 2016 at 12:01am

A random walk is a path across a network created by taking repeated random steps. They are usually allowed to traverse edges more than once, and visit vertices more than once. If not, it is a self-avoiding random walk.

We consider a random walk where at each vertex the walker takes a step (i.e. it does not stay at the vertex) along one of the edges connected to it, chosen with uniform probability, i.e. with probability \frac{1}{k_i}, where k_i is the degree. Thus, on an undirected network we have:

p_i(t)=\sum_j \frac{A_{ij}}{k_j} p_j(t-1)

or \mathbf{p}(t)=\mathbf{A}\mathbf{D}^{-1}\mathbf{p}(t-1)

where p_i(t) is the probability that the walker is at vertex i at (discrete) time t, and where \mathbf{D}=\mathrm{diag}(k_1,...,k_n). One can also write this relation in terms of the reduced adjacency matrix, \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}, and that can be useful sometimes.

We are interested in the limit as t \rightarrow \infty, where we expect the probability to approach a steady state \mathbf{p}(\infty) \equiv \mathbf{p}:

\mathbf{p}=\mathbf{A}\mathbf{D}^{-1}\mathbf{p}, which can be rewritten as \mathbf{L}\mathbf{D}^{-1}\mathbf{p}=0, so \mathbf{D}^{-1}\mathbf{p} is an eigenvector of the Graph laplacian (\mathbf{L}) with eigenvalue 0. But we know (see Graph laplacian) that in a connected network only the vector \mathbf{1}=(1,1,1,...) has eigenvalue 0. Therefore p_i \propto k_i, so normalizing, p_i = \frac{k_i}{\sum_i k_i}=\frac{k_i}{2m} (see Degree of a vertex (Graph theory))
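A quick numerical check of this on a small example (the graph is arbitrary, chosen non-bipartite so that the walk is aperiodic and the iteration converges): iterate p(t) = AD^{-1} p(t-1) and compare the result to k_i/2m.

```python
import numpy as np

# Triangle 0-1-2 with a pendant vertex 3 attached to 2; degrees 2, 2, 3, 1.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
k = A.sum(axis=1)
P = A / k              # P[i, j] = A[i, j] / k_j, i.e. the matrix A D^{-1}
p = np.full(4, 0.25)   # start from the uniform distribution
for _ in range(10_000):
    p = P @ p
print(p)               # converges to k_i / 2m = [1/4, 1/4, 3/8, 1/8]
```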

With a random walk, an interesting question is that of the mean first passage time, or the mean number of steps before reaching a certain node, when starting from a given node. To find this we consider an absorbing random walk, where a walk that arrives at a certain set of vertices (we will consider just one, call it v) will stay there.

We can then consider the probability p_v(t) of being at vertex v at time t. This is the same as the probability that the first passage time is equal to or less than t, and thus the probability that it is exactly t is p_v(t)-p_v(t-1), and the mean first passage time is:

\tau = \sum_{t=0}^\infty t\,[p_v(t)-p_v(t-1)]

Note that we can't rearrange terms in this sum, because it is not absolutely convergent!

Following the manipulations shown in Newman's book (section 6.14), we get to:

\tau = \mathbf{1} \cdot \mathbf{D'}\mathbf{L'}^{-1} \cdot \mathbf{p'}(0)

where the prime ' indicates that the v-th element, or the v-th row and column, have been removed. In particular \mathbf{L'} is called the v-th reduced Laplacian. This can be re-expressed a bit further, following Newman's book, for computational convenience.
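A sketch of this computation (helper name mine; the example graph is chosen so the answer is easy to check by hand: on a triangle, the MFPT from one vertex to another satisfies \tau = 1 + \tau/2 + 1/2, i.e. \tau = 2).

```python
import numpy as np

def mean_first_passage_time(A, start, target):
    """MFPT of an unbiased random walk on an undirected graph, from `start`
    to absorbing vertex `target`, via tau = 1 . D' L'^{-1} p'(0)
    (the reduced-Laplacian formula, Newman sec. 6.14)."""
    n = len(A)
    D = np.diag(A.sum(axis=1))
    L = D - A
    keep = [i for i in range(n) if i != target]   # drop the target row/column
    Lr = L[np.ix_(keep, keep)]                    # the v-th reduced Laplacian
    Dr = D[np.ix_(keep, keep)]
    p0 = np.zeros(n)
    p0[start] = 1.0
    return float(np.ones(n - 1) @ Dr @ np.linalg.solve(Lr, p0[keep]))

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
print(mean_first_passage_time(A, 0, 2))   # 2.0
```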

Resistor networks

Kirkoff's current law can be written as:

\sum_j A_{ij} \frac{V_j-V_i}{R} + I_i = 0

where IiI_i is an external current applied at some node in the network. This can be written in terms of the Graph laplacian as:

\mathbf{L}\mathbf{V}=R\mathbf{I} \quad (\dagger)

where \mathbf{V} is the vector of voltages. \mathbf{L} is not invertible, but this corresponds to the arbitrariness in the value of the voltages V_i, which can all be shifted up and down and still satisfy the equation. This is equivalent to adding a multiple of the vector \mathbf{1}, which we know has eigenvalue 0 under the Graph laplacian. However, if we fix the voltage at some node (to be 0, say), then we can remove the corresponding row and column from the equation (\dagger); the 0 eigenvalue is removed, the reduced Laplacian is now invertible, and so we can get the voltages, and thus the currents!
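A sketch of the procedure (helper name mine; two unit resistors in series as a sanity check: driving 1 A through the path 0-1-2 should drop 1 V per resistor).

```python
import numpy as np

def node_voltages(A, I, R=1.0, ground=0):
    """Solve Kirchhoff's current law L V = R I on a resistor network with
    identical resistances R, fixing V[ground] = 0 and inverting the
    reduced Laplacian."""
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A
    keep = [i for i in range(n) if i != ground]
    V = np.zeros(n)
    V[keep] = np.linalg.solve(L[np.ix_(keep, keep)], R * np.asarray(I)[keep])
    return V

# Path 0-1-2: 1 A injected at node 2, extracted at node 0.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(node_voltages(A, I=[-1.0, 0.0, 1.0]))   # [0, 1, 2]
```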

Some applications

Random walk sampling method for social networks

Random walk betweenness measure.

Random walk on a directed graph

guillefix 29th June 2016 at 7:03pm

Random-cluster model

guillefix 15th June 2016 at 4:45pm

A family of probabilistic models invented by Fortuin and Kasteleyn which include Percolation, and the Ising and Potts models as special cases.

The configuration space of the random-cluster model is the set of all subsets of the edge-set E, which we represent as the set \Omega=\{0,1\}^E. The model may be viewed as a parametric family of probability measures \phi_{p,q} on \Omega. When q=1, we recover bond Percolation; when q=2, we have the Ising model; and when q=2,3,4,... we have the different versions of the Potts model.

It turns out that long-range order in a Potts model corresponds to the existence of infinite clusters in the corresponding random-cluster model. In this sense the Potts and percolation phase transitions are counterparts of one another.

Reference: Grimmet - The Random-Cluster Model

Rate equation

guillefix 2nd June 2016 at 2:13am

Averaged version of a Master equation. Used, for instance in Chemical kinetics, and in Epidemiology.

Rational numbers

guillefix 7th February 2016 at 2:36am

Proof that square root of two is irrational

Imagine the Pythagorean squares associated with the sides of a right-angle triangle with equal leg sizes. By the Pythagorean theorem, the square corresponding to the hypotenuse has the same area as the sum of the squares of the legs.

Now if the ratio was a rational number, then one could choose a size for squares such that an integer number of them fitted on the sides of the triangles, and similarly the squares could be partitioned into these unit squares.

This means that the number of unit squares in the big square equals the sum of the number of unit squares in the squares of the legs, but as these are equal, this is just twice the number of unit squares from one leg. Therefore the number of unit squares in the big square must be even.

If the number of unit squares is even, cutting the square in half perpendicular to a side should give an integer number of squares. If the number of unit squares in a side wasn't even, cutting by a half would cut unit squares by a half. And there would be as many of these half unit squares in each half as there are unit squares in a side of the square. As the number in each half must be integer, this number must be even, which is a contradiction. Therefore, the number of unit squares in a side is even.

On the other hand, if we began by choosing the minimum ratio between the sides, then the number of unit squares in a leg is not even, for if it were we could just halve the number of squares in both.

Now consider cutting the initial right-angle triangle in half parallel to the hypotenuse. As the number of unit squares in the hypotenuse was shown to be even, the number of unit squares in the half-hypotenuse is an integer. Now the triangle formed by this segment and the leg of the original triangle is geometrically similar to the original triangle, and its sides are partitioned into integers. However, the leg, which acts as the hypotenuse of the new triangle, doesn't have an even number of unit squares, while we just showed that it should.

Therefore, the initial assumption that there was such an integer ratio must be wrong.

Raw material extraction

guillefix 7th May 2016 at 4:27pm

Ray Solomonoff

guillefix 4th May 2016 at 2:12am

http://people.idsia.ch/~juergen/ray.html

https://en.wikipedia.org/wiki/Ray_Solomonoff

Ray Solomonoff (1926-2009), pioneer of Machine learning, founder of Algorithmic Probability theory, father of the Universal Probability Distribution, creator of the Universal Theory of Inductive Inference. First to describe the fundamental concept of Algorithmic Information or Kolmogorov Complexity. In the new millennium his work became the foundation of the first mathematical theory of Optimal Universal Artificial Intelligence.

ReactJS

guillefix 20th July 2016 at 1:38pm

A framework for Frontend web development

meteor + react

react basics

JSX

Components

-> Class component. Can have state.

-> Stateless function component. Doesn't have state

Component properties (props)

Values and methods passed to a component when we use it (like arguments)

vid

proptypes, default properties

States

values and methods managed by the component itself.

vid

References (refs)

way of referencing an instance of a component from within a react app. It's like a DOM id of a component, that you can use to refer to that component.

vid

Component lifecycle

Adding or removing components to the dom is called mounting and unmounting. vid

updating. Even if we use shouldComponentUpdate to stop the component from rerendering, the state and props are still updated.

Higher order components

vid

Real analysis

guillefix 22nd January 2016 at 11:52pm

Real-space renormalization group

guillefix 16th June 2016 at 8:15pm

A Renormalization group scheme based on coarse-graining and rescaling over real space.

Real-space renormalization group and percolation

See Critical phenomena in percolation

Renormalization Group Theory - Percolation. In particular, see here.

A real-space renormalization group for site and bond percolation

Recurrent neural network

guillefix 29th June 2016 at 3:38pm

Recurrent neural nets. Vanishing gradient problem, naively, RNNs don't give you long term memory.. so you have Long short-term memory networks

Reference (computer science)

guillefix 13th July 2016 at 3:34pm

References in C++ are essentially constant pointers

References for percolation

guillefix 11th June 2016 at 2:05am

See Percolation

Section on percolation on Mason and Gleeson's book on Dynamical processes on networks, and on Newman's networks book. In particular, see Newman's book chapters 12, 13, and 17, for detailed calculations of GCC sizes, and other ones. Note that the standard calculation determines whether a GCC exists for an infinite network (for instance, the locally tree-like assumption is valid for infinite networks, and other parts of his calculations assume infinite size). Finite size effects should be interesting to explore.

Recent advances in percolation theory and its applications

Percolation Exercises

Percolation Theory notes

See Complex systems LectureNotes.

Percolation slides

Scanned Notes on Percolation

Percolation, Second Edition by Geoffrey Grimmett

References on random deterministic automata

guillefix 5th July 2016 at 5:00pm

References on the Duffing oscillator

guillefix 11th June 2016 at 1:40am

Book: The Duffing Equation: Nonlinear Oscillators and their Behaviour

See MMathPhys miniprojects and Duffing oscillator

More papers and references:

https://en.wikipedia.org/wiki/Intermittency

https://en.wikipedia.org/wiki/Crisis_%28dynamical_systems%29

Y. Ueda, Steady Motions Exhibited by Duffing’s Equation: A Picture Book of Regular And Chaotic Motions

Catastrophes with Indeterminate Outcome Stewart, H. B. ; Ueda, Y.

EXPLOSION OF STRANGE ATTRACTORS EXHIBITED BY DUFFING'S EQUATION - Yoshisuke Ueda

Common dynamical features on periodically driven strictly dissipative oscillators (introduces torsion and winding numbers)

Comparison of bifurcation sets of driven strictly dissipative oscillators

Wada basins

https://en.wikipedia.org/wiki/Lakes_of_Wada

Wada basin boundaries and basin cells Other link

Unpredictable behavior in the Duffing oscillator: Wada basins

Testing for Basins of Wada

Response Of A Harmonically Excited Hard Duffing Oscillator – Numerical And Experimental Investigation

Experimental investigation of the response of a harmonically excited hard Duffing oscillator From here

Analytical methods

Exact analytical solutions for forced cubic restoring force oscillator Uses Jacobi elliptic function (only for undamped Ueda oscillator I think).

A comparison of classical and high dimensional harmonic balance approaches for a Duffing oscillator

Second order averaging and bifurcations to subharmonics in duffing's equation

Subharmonic Oscillations in Nonlinear Systems

Chaotic states and routes to chaos in the forced pendulum

Organization of periodic orbits in the driven Duffing oscillator

Structure in the bifurcation diagram of the Duffing oscillator

superstructure in the bifurcation set of the duffing equation

General case of crisis-induced intermittency in the Duffing equation for double-well Duffing oscillator.

On the jump-up and jump-down frequencies of the Duffing oscillator

More books:

Chaos in Nonlinear Oscillators: Controlling and Synchronization By M Lakshmanan, K Murali

Antimonotonicity reversal of period-doubling cascades

Reflexivity

guillefix 14th July 2016 at 12:57am

Reflexivity refers to a property of a binary Relation R on X:

for all x \in X, \; x R x

Regression analysis

guillefix 9th July 2016 at 4:40am

Discriminative Supervised learning where the output value is continuous and quantitative (i.e. it has an ordering, and a notion of closeness (metric)).

Notes

Linear regression

Nearest-neighbour classification

Kernel linear regression

Nonlinear regression

Multiclass MLP

Regularizer helps control the model complexity (by constraining the size of the parameter vector \theta). It can also be seen as adding a prior (in Bayesian statistics)


https://www.wikiwand.com/en/Regression_analysis

Regular expressions

guillefix 30th June 2016 at 1:40am

regular_equivalence.png

guillefix 13th February 2016 at 1:24pm

Reinforcement learning

guillefix 12th July 2016 at 12:31am

See Machine learning

https://en.wikipedia.org/wiki/Markov_decision_process

Markov_decision_process: definition

A Markov decision process is a 5-tuple (S, A, P_\cdot(\cdot,\cdot), R_\cdot(\cdot,\cdot), \gamma), where

  • S is a finite set of states,
  • A is a finite set of actions (alternatively, A_s is the finite set of actions available from state s),
  • P_a(s,s') = \Pr(s_{t+1}=s' \mid s_t = s, a_t=a) is the probability that action a in state s at time t will lead to state s' at time t+1. I.e. what happens when you take an action
  • R_a(s,s') is the immediate reward (or expected immediate reward) received after the transition to state s' from state s. I.e. what reward you get when something happens
  • \gamma \in [0,1] is the discount factor, which represents the difference in importance between future rewards and present rewards.

(Note: The theory of Markov decision processes does not state that SS or AA are finite, but the basic algorithms below assume that they are finite.)

Optimal policy problem

The core problem of MDPs is to find a "policy" for the decision maker: a function \pi that specifies the action \pi(s) that the decision maker will choose when in state s. Note that once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain.

The goal is to choose a policy \pi that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:

\sum^{\infty}_{t=0} \gamma^t R_{a_t}(s_t, s_{t+1})    (where we choose a_t = \pi(s_t))

where \gamma is the discount factor and satisfies 0 \le \gamma < 1. (For example, \gamma = 1/(1+r) when the discount rate is r.) \gamma is typically close to 1.

Because of the Markov property, the optimal policy for this particular problem can indeed be written as a function of s only, as assumed above.

Learning algorithms

MDPs can be solved by Linear programming or Dynamic programming.

Dynamic programming approach

The algorithm has the following two kinds of steps, which are repeated in some order for all the states until no further changes take place. They are defined recursively as follows:

\pi(s) := \arg \max_a \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right) \right\}
V(s) := \sum_{s'} P_{\pi(s)} (s,s') \left( R_{\pi(s)} (s,s') + \gamma V(s') \right)

V(s) will contain the discounted sum of the rewards to be earned (on average) by following that solution from state s.

Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.

There are variants, in particular value iteration and policy iteration described in the Wiki page.
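The two update steps above can be sketched as a minimal value-iteration loop. This is only an illustrative toy: the 2-state, 2-action transition and reward tables below are invented, not taken from any specific example in the text.

```python
# Minimal value iteration on a toy 2-state, 2-action MDP.
# P[a][s][s2]: transition probabilities, R[a][s][s2]: rewards.

S = [0, 1]
A = [0, 1]
P = {0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},
     1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.0, 1: 1.0}}}
R = {0: {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}},
     1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.0, 1: 0.0}}}
gamma = 0.9

V = {s: 0.0 for s in S}
for _ in range(1000):  # iterate the Bellman optimality update to convergence
    V = {s: max(sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in S)
                for a in A)
         for s in S}

# Greedy policy with respect to the converged value function
pi = {s: max(A, key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2])
                                  for s2 in S))
      for s in S}
print(V, pi)
```

Policy iteration would instead alternate the two steps: evaluate V for the current \pi, then improve \pi greedily, repeating until the policy stops changing.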

  • Trust Region Policy Optimization [1]
  • Proximal Policy Optimization (i.e., TRPO, but using a penalty instead of a constraint on KL divergence), where each subproblem is solved with either SGD or L-BFGS
  • Cross Entropy Method

RL Course by David Silver

Deep reinforcement learning

See Nando's lectures

OpenAI Gym

https://gym.openai.com/docs

https://github.com/openai/gym

Example: https://github.com/joschu/modular_rl

Pavlov.js - Reinforcement learning using Markov Decision Processes

See also Decision theory

Relation

guillefix 14th July 2016 at 1:07am

A relation is a subset of a Cartesian product.

A relation is often used to refer to a binary relation, which is a subset of X \times Y. An element x \in X is said to be related to y \in Y (denoted xRy) if the pair (x,y) \in R \subset X \times Y.

A relation on X is used to refer to a subset of X \times X.

A Function F: X \rightarrow Y defines a relation, but not all relations correspond to functions.
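A minimal sketch of this last point (the helper and example pairs are invented for illustration): a relation stored as a set of pairs corresponds to a function exactly when every x is related to exactly one y.

```python
# A binary relation as a set of pairs drawn from X x Y; check whether
# it defines a function X -> Y (each x related to exactly one y).

def is_function(X, R):
    return all(sum(1 for (a, b) in R if a == x) == 1 for x in X)

X, Y = {0, 1, 2}, {'a', 'b'}
F = {(0, 'a'), (1, 'a'), (2, 'b')}            # a function X -> Y
G = {(0, 'a'), (0, 'b'), (1, 'a'), (2, 'b')}  # 0 related to two elements

print(is_function(X, F))  # True
print(is_function(X, G))  # False
```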

Examples of relations

Total ordering

Partial ordering

Equivalence relation


http://mathworld.wolfram.com/Relation.html

Relations between percolation models and Potts models

guillefix 12th June 2016 at 2:12pm

See Percolation theory

In 1969, Fortuin and Kasteleyn (FK) [27,28,103,104] found an interesting mapping between the q-state Potts model, which includes the Ising model for q = 2, and a correlated bond-percolation model called the random-cluster model. It can be shown that there is a one-to-one correspondence between different thermodynamic quantities and their geometric counterparts based on the statistical and fractal properties of FK clusters.

This allowed powerful renormalization group ideas to be used [74].

Swendsen and Wang [105], and then Wolff [106], have exploited this mapping to devise extraordinarily efficient Monte Carlo algorithms.

There are mappings between the Ising model at a given dimension and a model of manifolds surrounding the geometric spin clusters.

Percolation and the Potts model. Many of the tools of Statistical physics have been applied to percolation through these mappings.

Relations between the stability of Boolean networks and percolation

guillefix 1st July 2016 at 5:03pm

Relativity

guillefix 16th May 2016 at 9:09pm

Religion

guillefix 17th May 2016 at 1:23am

Renormalization

guillefix 11th June 2016 at 9:17pm

See Gingkotree, and books.

See also Renormalization group

Renormalization group

guillefix 12th June 2016 at 1:06am

See also Critical phenomena, field theory...

A method to obtain macroscopic properties from microscopic theories, among other things. The general framework (as applied to critical phenomena) is presented below. For other applications, the later steps will be different, but the general setup is the same.

1. Define RG scheme (often involving coarse graining and scaling; this is the case, for instance, in Real-space renormalization group), that defines new variables, while leaving the partition function fixed (or at least approximately fixed).

2. This scheme produces an RG transformation on the couplings/parameters of the theory. This transformation, if iterated, produces an RG flow in the space of parameters. The flow can indeed be analyzed with the tools of the theory of Dynamical systems

3. Any point near a fixed point in the space of parameters has relevant and irrelevant (and possibly marginal) directions. These correspond to natural coordinates related to the unstable and stable manifolds of a fixed point, which in the linear neighbourhood of the fixed point are called scaling variables. Relevant directions are the ones that determine the long-time dynamics under the RG flow.

4. Changes in tunable parameters of the theory (like temperature, volume, external magnetic field, etc.) can be related to changes in coupling constants that produce the same change in the free energy. These changes should be along relevant directions because tunable parameters can affect the qualitative macroscopic behaviour of the theory, and so should affect the long-time behaviour of the theory under the RG flow.

5. A critical surface corresponds to the stable manifold of a saddle fixed point (this manifold is also called separatrix in Dynamical systems theory, because it separates qualitatively different future flows, corresponding to different phases, in a physical system). A critical point of a family of theories parametrized by a parameter, and spanning a 1D manifold (curve) in the whole space of theories is the intersection of this curve with the critical surface.

6. Theories near the critical point evolve to the vicinity of the fixed point under a finite number of iterations of the RG transformation. Theories with slightly different tuning parameters evolve to slightly different points in the vicinity of the critical point. In particular, it can be argued that for a bicritical point (with two relevant directions), there will be a relevant variable that corresponds to a "thermal" deviation, u_t \sim t/t_0, and another to a "magnetic" deviation, u_h \sim h/h_0 (see Cardy's book for some more explanation). Here deviations refer to deviations from the critical point. These relations are linear simply because we are taking t and h to be small (near the critical point), and we have Taylor expanded (assuming the relation is analytic). t_0 and h_0 are called scaling factors and are non-universal.

7. From the RG scheme, one easily derives how u_t and u_h change close to the fixed point, using the linearized RG flow. From the RG scheme, one can also easily find how the free energy (per volume, or per site), f, changes under RG flow, and thus how it changes under changes of u_t and u_h.

8. Finally, by relating u_t and u_h to t and h, the renormalization group allows us to find how the free energy changes under changes of the thermodynamic variables (t and h), and thus it allows us to find thermodynamic coefficients and quantities (which are derivatives of f w.r.t. thermodynamic quantities, such as t and h), as functions of the thermodynamic variables t and h. These often have power-law form, and from them we can extract critical exponents. These critical exponents turn out to depend just on the dimensionality and the eigenvalues of the relevant variables near the fixed point. Thus, any theory with a critical point flowing to this same fixed point will have the same critical exponents, and is said to belong to the same universality class.

These last steps can be seen carried out for the case of the spin-block transformation (a particular RG scheme) in Cardy's book, or in this page. The resulting form of the (singular part of the) free energy is the scaling form

f(t,h) = |t|^{d/y_t} \Phi\left( h/|t|^{y_h/y_t} \right)

where y_t and y_h are the RG eigenvalues of the relevant scaling variables, and \Phi is known as a scaling function

The scaling coefficients (critical exponents) turn out to be given in terms of the RG eigenvalues y_t and y_h; for example, \alpha = 2 - d/y_t, \beta = (d-y_h)/y_t, \gamma = (2y_h-d)/y_t, and \nu = 1/y_t.

Scaling relations, such as \alpha + 2\beta + \gamma = 2, relate the critical exponents.

Sometimes, because of the generality of this, the above form of the free energy is assumed instead of derived from RG, and this is known as the scaling hypothesis. See this series of videos: 6. The Scaling Hypothesis Part 1

Resource management

guillefix 1st July 2016 at 11:06pm

Reverse osmosis

guillefix 2nd July 2016 at 4:56am

A process in which one reverses the osmotic flow by applying a pressure larger than the osmotic pressure. It has many applications in Industry, for instance in desalination technologies. See Osmosis

See also Piezodialysis for alternative.

Colloidal fouling of reverse osmosis membranes

Rheology

guillefix 11th May 2016 at 1:26pm

Rheology is a branch of Continuum mechanics that studies the flow of matter, primarily in a liquid state, but also as 'soft solids' or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. That is, rheology does not study a particular class of matter, but the flow of any matter.

Robert H. Goddard

guillefix 25th June 2016 at 3:34am

One of the fathers of Rocketry

Robotics

guillefix 25th June 2016 at 4:07am

Rocketry

guillefix 25th June 2016 at 3:35am

Rotational dynamics

guillefix 16th July 2016 at 3:45pm

See the mechanical universe

Moment of inertia

Rubber

guillefix 11th May 2016 at 2:20pm

A rubber is a viscoelastic Polymer (also called elastomer). What makes it viscoelastic is most often that the polymer is cross-linked (though not too cross-linked, as that can lead to rigid materials).

Traditionally, cross-linking was done by exposing natural latex to sulfur, a process known as vulcanization.

Although rubbers are viscoelastic, there is really a continuum between solid and viscoelastic, and some are closer to solids, while others are more clearly viscoelastic.

Silly putty is interesting (apart from fun), because it has viscoelastic properties, but the polymers it's made of are not cross-linked, they are just very long!

http://www.open.edu/openlearn/science-maths-technology/science/chemistry/introduction-polymers/content-section-5.2.1

Viscoelastic Behavior of Rubbery Materials

Glass transition temperature

There is a temperature, called the glass transition temperature, below which a cross-linked polymer stops being viscoelastic (and thus a rubber), and becomes glassy, and hard.

Above the glass transition temperature, the polymer chains are loose and floppy, and that's why a rubber classifies as a soft material.

Rubbers are also thermoplastic.

Natural rubber

Synthetic rubber

Definitions of terms relating to the structure and processing of sols, gels, networks, and inorganic-organic hybrid materials (IUPAC Recommendations 2007)

Sage (CAS)

guillefix 11th February 2016 at 1:55am

Saving offline

guillefix 17th January 2016 at 3:18pm

To save a working copy for offline editing (and then uploading elsewhere, like in personal webpage):

  • Go to control panel -> Saving -> Backup Url
  • Look for last backup link, right click and "Save link as"

Scale-free networks

guillefix 8th May 2016 at 10:21pm

Networks with power-law degree distributions are sometimes called scale-free networks. A power law degree distribution has the form:

p_k=Ck^{-\alpha}

where \alpha is the exponent. This form is found in many examples of real-life networks, and in many other places (see Power laws). Values 2<\alpha<3 are typical. Also, typically the power law is only obeyed in the tail of the distribution, not for small values of k. And typically it is also not obeyed at the very high end, for example due to some cut-off.

Detecting and visualizing power laws

The simplest approach is a log-log plot of the histogram of the degree distribution (see Large-scale structure of networks). One problem is that the tail of the distribution, where the power law is usually followed, often has very few samples, so statistical fluctuations are relatively larger, making it hard to judge whether the distribution follows a straight line in the log-log plot. Finding the right bin size is a way to improve this, but it is always a compromise between larger bins, to reduce the statistical error on the tail, and smaller bins, to get more detail of the distribution.

An even better strategy is to increase the size of bins for larger degrees (normalizing by bin width so that the different bins can be compared). A way to do this is with logarithmic binning, where each bin is a constant factor larger than the previous bin, often a factor of 2.
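The logarithmic-binning procedure can be sketched as follows; this is a toy implementation, and the degree sequence used below is invented for illustration.

```python
# Logarithmic binning of a degree sequence: each bin is a constant factor
# wider than the previous one, and counts are normalized by bin width so
# bins remain comparable.

def log_binned_hist(degrees, factor=2.0):
    edges = [1.0]
    while edges[-1] <= max(degrees):
        edges.append(edges[-1] * factor)
    counts = [0] * (len(edges) - 1)
    for k in degrees:
        for i in range(len(counts)):
            if edges[i] <= k < edges[i + 1]:
                counts[i] += 1
                break
    n = len(degrees)
    # (bin centre, fraction of nodes per unit of degree) pairs
    return [((edges[i] + edges[i + 1]) / 2,
             counts[i] / (n * (edges[i + 1] - edges[i])))
            for i in range(len(counts))]

degrees = [1] * 500 + [2] * 120 + [4] * 30 + [8] * 8 + [16] * 2
for centre, density in log_binned_hist(degrees):
    print(centre, density)
```

Plotting the returned (centre, density) pairs on log-log axes gives the binned estimate of p_k.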

Another way to detect power laws is by using the cumulative distribution function, P_k, which is the probability that the degree of a vertex is k or larger (i.e. P_k=\sum_{k'=k}^\infty p_{k'}). If p_k follows a power law (for k>k_{\text{min}}, say), then P_k also does, approximately, for those k (as can be shown by approximating the sum by an integral), with exponent \alpha-1. As plotting this function does not require binning (the noise gets smaller in the cumulative distribution, and is smallest in the tail!), it doesn't throw away information. One way to get this information is via the ranks of the vertices, i.e. their position in a list ordered in descending order of degree (this agrees exactly with their cumulative frequency if no nodes have the same degree, and this is approximately true for the tail of the distribution). These plots are often called rank/frequency plots.

One disadvantage of cumulative distribution functions is that nearby points are correlated, so a linear fit using standard techniques (like least squares), which assume independence of the points, gives biased answers. In fact this is also true for the degree distribution function itself, although for different reasons ([72,141] in Newman's book).

[72] has many details, including a formula for determining \alpha from the data directly (the most reliable way), and other useful results and tools.
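Assuming [72] is the Clauset–Shalizi–Newman paper, the direct formula is (in the continuous approximation) the maximum-likelihood estimator \hat{\alpha} = 1 + n / \sum_i \ln(k_i/k_{\text{min}}). A quick sketch with a synthetic Pareto sample:

```python
import math
import random

# Continuous-approximation MLE for the power-law exponent:
# alpha_hat = 1 + n / sum(ln(k_i / k_min)) over the tail k_i >= k_min.

def alpha_mle(ks, kmin):
    tail = [k for k in ks if k >= kmin]
    return 1.0 + len(tail) / sum(math.log(k / kmin) for k in tail)

# Synthetic sample with true exponent alpha = 2.5 via inverse-CDF sampling:
# k = kmin * u^(-1/(alpha - 1)) for u uniform in (0, 1].
random.seed(0)
kmin, alpha = 1.0, 2.5
sample = [kmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
          for _ in range(20000)]

print(alpha_mle(sample, kmin))  # close to the true value 2.5
```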

For more properties see Power laws.


Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law.

Wikipedia page

Scattering theory

guillefix 8th February 2016 at 3:25pm

scents.jpg

Schramm–Loewner evolution

guillefix 11th June 2016 at 2:40pm

Science

guillefix 17th June 2016 at 11:46pm

The knowledge, methods, and everything else regarding the understanding of the Cosmos. This includes essentially structures based on logic (and Mathematics, in general) that must match what is observed in the Cosmos. See The Scientific Method-Richard Feynman, and Philosophy of science

Formal sciences and philosophy

Lay the concrete foundation for the rest of the sciences, by looking at fundamental structures and ideas. From the more theoretical to the more applied:

Philosophy of science -> Mathematics -> Theoretical computer science -> Mathematical methods and Scientific computing

Portal:Contents/Mathematics and logic

Natural science

Natural science is often defined as the part of science studying natural phenomena (that is, those not caused by Humans). These are, in some sense, the foundational sciences, as everything (including Humanity) is ultimately part of Nature (the Physical world).

Roughly, we can categorize the natural sciences, in order of the complexity of the studied systems, forming a sort of hierarchy, or emergent new phenomena:

Physics -> Chemistry -> Biology -> Cognitive science

Portal:Contents/Natural and physical sciences

Systems sciences

Systems science studies very complex natural phenomena, as well as human phenomena (which are, of course, a result of natural phenomena, but often of the highest complexity we know). It is the application and integration of the more reductionist ideas of the foundational sciences to larger systems.

Systems Sciences are the highest level of complexity, looking at parts of the Cosmos made of many parts interacting in complex ways.

Some of the most important ones are Social sciences, which look at societies (large collections of complex agents).

Portal:Contents/Society and social sciences


The distinctions above are fuzzy, and a bit ambiguous. This is partially because the History of science is very complex, with conflicting ideas of how science should be organized.

However, as can be seen from above, my preferred way of organizing it is an approximate hierarchy of complexity: from simple (reductionist) laws to complex systems.


Wikipedia:Portal/Directory/Science and mathematics

Thaumaturgy in the Age of Science by Prof. V. Balakrishnan

Free MIT books: https://archive.org/details/mitlibraries

Two minutes papers

Crowdfunded science: https://experiment.com/

Science and Art

guillefix 22nd March 2016 at 3:40am

Scientific computing

guillefix 3rd July 2016 at 4:58am

Scientific experiment

guillefix 17th June 2016 at 1:42am

Sculpture

guillefix 31st May 2016 at 12:01am

Sea

guillefix 3rd May 2016 at 11:55pm

Search algorithms

guillefix 31st January 2016 at 12:35am

Seat

guillefix 5th July 2016 at 4:04am

A seat is a place to sit

Seat as a verb, also means: arrange for (someone) to sit somewhere.

second_passage_path.jpg

Selection_162.png

Selection_163.png

Selection_164.png

Selection_165.png

Selection_166.png

Selection_167.png

Selection_168.png

Selection_169.png

Selection_170.png

Selection_171.png

Selection_172.png

Selection_176.png

Selection_192.png

Selection_194.png

Selection_196.png

Selection_197.png

Selection_198.png

Self-assembly

guillefix 10th June 2016 at 12:02am

Self-assembly of active colloids

guillefix 18th June 2016 at 1:09am

Active colloid, Self-assembly, Collective behaviour of active colloids

Self-assembly of active colloidal molecules with dynamic function

Self-Assembly of Catalytically Active Colloidal Molecules: Tailoring Activity Through Surface Chemistry online

While individual colloids that are symmetrically coated do not exhibit any form of dynamical activity, the concentration fields resulting from their chemical activity decay as 1/r and produce gradients that attract or repel other colloids depending on their surface chemistry and ambient variables. This results in a nonequilibrium analog of ionic systems, but with the remarkable novel feature of action-reaction symmetry breaking.

Effective phoretic interactions

See Collective behaviour of active colloids for further derivations of similar effective interactions between active colloids.

The effective interaction, in the far-field regime, turns out to be analogous to the Coulomb interaction with generalized charges that break action-reaction symmetry. In particular, we differentiate between the charge that produces the field, α, and the charge that responds to the field, μ.

Model and simulation: There is a highly successful and widely used restricted primitive model (RPM) for charged colloids, based on Coulomb interactions augmented with short-range steric repulsion between the particles. This is generalized to the nonequilibrium active colloids, and the model is analyzed using Brownian dynamics simulations to explore novel phenomena in this system.

Periodic boundary conditions are used, and interactions are treated using the minimum image convention (what is this?)

Approximations

For simplicity, they use a model in which the catalytic activities of the colloids are simplified into net production or consumption of chemicals at given rates. They also assume the substrate concentration is constant within the time of their simulations, which is a good approximation in the dilute limit.

we do not consider the anomalous superdiffusion at relatively short time scales

In the studied experimental systems, the Peclet number is small (the Peclet number is \text{Pe} = V \sigma /D, where V is the velocity of the colloid, \sigma is its diameter, and D is the diffusion coefficient of the solute molecules). This means that the solute concentration profile relaxes very quickly to a comoving cloud when a colloidal particle moves. At finite \text{Pe}, the cloud is distorted. This also means that we can ignore the spontaneous symmetry breaking (spontaneous autophoretic motion of isotropic particles) at large \text{Pe}.

Concentration fields are assumed to take their far-field form. Near fields would have to be calculated by solving the diffusion equation, and the resulting forces will in general not be pairwise additive. However, the forces retain the action-reaction asymmetry, and this will only affect the dynamics quantitatively.

Hydrodynamic interactions are ignored, but their effect would just change the dynamics quantitatively (and not qualitatively). See more details of the model here. For the results they use to estimate the effect of hydrodynamic interactions see Hydrodynamic simulations of self-phoretic microswimmers

Brownian dynamics simulation is done so that the colloids are constrained to move in 2D (while the diffusing particles diffuse in 3D, so the concentration still decays as 1/r).

Non-equilibrium effects

When the effective interactions between the particles are not symmetric, the system cannot reach an equilibrium state because the condition of detailed balance will not be fulfilled. This can manifest itself in the form of frustration that leads to nonequilibrium fluxes. This also means that the long-time behaviour may include limit cycles (oscillatory instability, see below).

Cluster with oscillatory instability

The internal dynamics of quasi-stable (for small perturbations) clusters for the case of two kinds of particles (A and B) can be analyzed using d'Alembert's principle (see their Appendix). A Hopf bifurcation can take place (where the parameters are the charges of the two kinds of particles), so that in a certain regime a stable limit cycle forms. This is the oscillatory instability. This is demonstrated in the A4B8 colloidal molecule.

What symmetry makes the second harmonic absent? Probably some dynamical symmetry

Cluster with run-and-tumble behaviour

In the AB3 molecule one finds that in many parameter regimes, there are two stable configurations, and the system stochastically jumps between the two. One of the configurations has the B colloids symmetrically placed around the A, while in the other they are asymmetrical, causing (due to the asymmetry of the forces of the colloids in the fluid) a net self-propelling velocity.

The motion of the internal degrees of freedom is again derived using d'Alembert's principle. There is an angle variable which is cyclic, due to rotational invariance, and gives a conservation law. The other two angles follow a set of coupled ODEs which have equilibria corresponding to the stable configurations.

By simplifying the dynamics to the line where the two angles are equal (because both equilibria lie on it), one can obtain a single-variable Langevin equation and a corresponding Fokker-Planck equation to study the probability distribution of the system, which can be used to find, for instance, how much time is spent on run vs tumble behaviour. This was measured from the Brownian dynamics simulations. The residence times in the run-and-tumble phases exhibit an exponential dependence on the value of \tilde{\mu}_A. The measured behaviours are consistent with what we expect from Kramers's first-passage time theory

D'Alembert's principle in overdamped dynamics

Self-averaging

guillefix 12th July 2016 at 1:03pm

A quantity is self-averaging if its sample to sample fluctuations vanish in the thermodynamic limit.

Non-self-averaging quantities are characteristic of Disordered systems

Self-diffusiophoresis

guillefix 17th June 2016 at 12:26am

In Self-diffusiophoresis (a kind of self-propulsion), a particle itself produces the compound it interacts with, through Diffusiophoresis, causing it to move.

Self-phoretic particle: it creates something that it then attracts or repels, and that something then pushes the surrounding fluid (creating a slip velocity). The particle is then indirectly pushing on the fluid. Same kind of indirect propulsion as ionocrafts!

Another analogy for the symmetric catalytic Active colloids, in the limit that particles of type B are attracted to A, but A is not attracted to B (see this paper): B particles are like little homing missiles that target A particles.

An example is a particle that catalyzes the reaction 2H2O2 → 2H2O + O2, creating an O2 gradient and interacting with it. Another example is a particle that facilitates the polymerization of a biopolymer (e.g. actin), which creates a gradient because individual monomers diffuse, whereas the polymers do not. The latter process is one possible mechanism for the propulsion of Listeria bacteria by means of actin 'comet tails'.

http://www.sas.upenn.edu/~tidema/research.html

Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products

Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products (article)

"For a totally impermeable particle, depletion of the molecules near its surface causes a lateral slip velocity that results in net motion of the sphere. ". Depletion only if the mobility is positive, which corresponds to the surface of the particle repelling the solvent molecules thus depleting

The diffusiophoretic effects also turn out to contribute to the diffusion of the particle (the induced velocities have a random component), with a diffusion constant that can be estimated.

Consideration of rotational diffusion is important, as it determines the time scale over which the particle is able to move consistently in a given direction.

Dynamics and efficiency of a self-propelled, diffusiophoretic swimmer

Self-Diffusiophoresis in the Advection Dominated Regime

Concentration around a self-diffusiophoretic particle

Diffusiophoresis

See Diffusiophoresis for the equations giving the drift velocity of the particle given a particular concentration distribution cc on its surface (found from the above equation).

Designing phoretic micro- and nano-swimmers

Collective behaviour of active colloids

Self-driving car

guillefix 30th June 2016 at 11:11pm

Self-electrophoresis

guillefix 17th June 2016 at 6:27pm

Self-organization

guillefix 3rd July 2016 at 2:47pm

Stuart Kauffman Books

Self-organization in non-equilibrium thermodynamics - Book by Prigogine et al

Information Measures of Complexity, Emergence, Self-organization, Homeostasis, and Autopoiesis

https://www.youtube.com/watch?v=Ba0zSNYkWtw


http://pcp.vub.ac.be/SELFORG.html

The Meaning of Self-organization in Computing. See Complexity theory, Complex systems, Sloppy systems. Several people at the Free University of Brussels seem to be working on complex systems, from a very holistic approach.

Evolution, and feedback.

How does one define evolving systems that accomplish a desired function? We need the right feedbacks in a complex system. But the answer is not obvious. See Evolutionary computing


On Self-Organizing Systems and Their Environments

Principles of the self-organizing system

http://bactra.org/thesis/single-spaced-thesis.pdf

Self-Organisation of Symbolic Information See Written language. Selforganization of symbols and information


Self-organizing map in unsupervised Machine learning

Self-organized criticality

guillefix 16th June 2016 at 12:17am

Self-propelled particle

guillefix 17th June 2016 at 6:33pm

An active particle, often a colloid, or a nanoparticle, that propels itself through a fluid, often via some phoretic mechanism, or via some mechanical propulsion mechanism (the particles are then often called microswimmers). Generally, "active colloid" simply refers to a self-propelled colloid (and similarly with "active particle" in general).

"In the current miniaturization race towards small motors and engines, a rapidly expanding subdomain is the quest for autonomous swimmers, able to move in fluids which appear very viscous given the small length scales (low Reynolds number). Robotic microswimmers that generate surface distortions is an avenue (e.g. by mimicking sperms [1]), but it seems equally interesting to try to take advantage of physical phenomena that become predominant at small scales. Interfacial ‘phoretic’ effects (electrophoresis, thermophoresis, diffusiophoresis, [2]) by which the gradients of fields (electrostatic potential, temperature, concentration) drive the motion of colloid particles, are from this standpoint a natural avenue given the increased surface to volume ratio of smaller objects. "

Robotic microswimmers

Microscopic artificial swimmers

Phoretic swimmers

Designing phoretic micro- and nano-swimmers. A common design for phoretic swimmers is the Janus swimmer design

Self-diffusiophoresis

Self-electrophoresis

Self-thermophoresis


wiki

https://scholar.google.co.uk/scholar?hl=en&q=self-propelled+particle&btnG=&as_sdt=1%2C5&as_sdtp

Ramin's papers

Phoretic self-propulsion

Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products See Self-diffusiophoresis

Designing phoretic micro- and nano-swimmers See more at Designing phoretic micro- and nano-swimmers

Single phoretic swimmer stochastic dynamics

Self-Motile Colloidal Particles: From Directed Propulsion to Random Walk (experiment)

Anomalous Diffusion of Symmetric and Asymmetric Active Colloids

Stochastic dynamics of self-propelled colloids

Self-assembly of phoretic active colloids

Self-assembly of active colloidal molecules with dynamic function See Self-assembly of active colloids

Self-Assembly of Catalytically Active Colloidal Molecules: Tailoring Activity Through Surface Chemistry See Self-assembly of active colloids

Collective behaviour

Clusters, asters, and collective oscillations in chemotactic colloids See Collective behaviour of active colloids. There are a lot of different regimes in their complicated mathematical models, and a fuller understanding requires going through their models more carefully

Emergent Cometlike Swarming of Optically Driven Thermally Active Colloids

Collective Behavior of Thermally Active Colloids See Collective behaviour of thermally active colloids. Others

Electrokinetic effects in catalytic platinum-insulator Janus swimmers. See Catalytic conductor-insulator Janus swimmer, Electrokinetic effects in catalytic conductor-insulator Janus swimmers. See also: Locomotion of electrocatalytic nanomotors due to reaction induced charge autoelectrophoresis and Self-electrophoresis

Boundaries can steer active Janus spheres See Boundary effects on the motion of active colloids

Self-thermophoresis

guillefix 3rd June 2016 at 4:15am

Collective Behavior of Thermally Active Colloids (pdf)

The motion of colloidal particles in a solution in the presence of an externally applied temperature gradient is known as thermophoresis or the Soret effect.

Since such thermally active colloids would create temperature profiles around them that decay as 1/r1/r, in addition to causing them to self-propel, thermophoresis could provide a mechanism for them to interact with one another in a solution. The long-ranged nature of the intercolloidal thermophoretic interaction could lead to interesting collective behaviors.

Semigroup

guillefix 28th June 2016 at 4:44pm

In mathematics, a semigroup is an algebraic structure consisting of a set together with an associative binary operation.

If they have an identity element, they are a Monoid
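A quick sketch of checking these definitions on finite examples; the left-zero operation a∘b = a is a standard example of a semigroup with no identity:

```python
from itertools import product

def is_associative(S, op):
    """Check (a∘b)∘c == a∘(b∘c) for every triple in the finite set S."""
    return all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, repeat=3))

def identity_element(S, op):
    """Return a two-sided identity element if one exists, else None."""
    for e in S:
        if all(op(e, a) == a == op(a, e) for a in S):
            return e
    return None

S = {0, 1, 2}
left = lambda a, b: a             # left-zero semigroup: a∘b = a
print(is_associative(S, left))    # True: it is a semigroup
print(identity_element(S, left))  # None: but not a monoid
print(identity_element(S, max))   # 0: (S, max) is a monoid
```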

Sensitivity analysis

guillefix 28th June 2016 at 4:04pm

Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs.

Global Sensitivity Analysis: The Primer

A review on global sensitivity analysis methods

Making sense of global sensitivity analyses

Global Sensitivity Analysis

Separation process

guillefix 2nd July 2016 at 5:45pm

Sequence space

guillefix 15th July 2016 at 3:56am

A sequence space refers to the Set of all sequences of symbols, of a given length, where the symbols belong to an alphabet (another Set), which may be endowed with some more structure.

More precisely, a sequence is a function from an index set II to the alphabet set AA, and the sequence space is the set of all such functions. This is the same as the set ×iIAi\times_{i \in I} A_i , where ×\times denotes Cartesian product. The sequence space can be notated AIA^I.

Two common examples of infinite sequence spaces are ANA^{\mathbb{N}}, where the index set is the naturals, and AZA^{\mathbb{Z}}, where the index set are the integers. Members of this latter example are also called bi-infinite sequences.
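For a finite index set this is easy to make concrete: a tiny sketch of AIA^I as a Cartesian product, using Python's itertools (alphabet and length are arbitrary choices):

```python
from itertools import product

A = ['0', '1']   # the alphabet
I = range(3)     # a finite index set, so sequences here have length 3

# A^I: all functions I -> A, realised as tuples indexed by I
sequences = list(product(A, repeat=len(I)))
print(len(sequences))                # 8 = |A|^|I| = 2^3
print(sequences[0], sequences[-1])   # ('0', '0', '0') ('1', '1', '1')
```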

See this video

Topology

As the sequence set is constructed as a Cartesian product, we can endow it with the Product topology. The alphabet set, if finite can be endowed with the Discrete topology

Under this topology one can show that a set FF is closed iff there is a tree TT (a set of finite sequences, or strings) such that F=TF=T^\infty, where TT^\infty is the set of all paths through TT. See here for details and proof.

Measure on sequence spaces

Math 574, Lesson 1-5: Measures on Sequence Spaces

As the Cylinder sets generate the Product topology, which in turn generates a Borel sigma-algebra on our space, then if we find the algebra generated by the cylinder sets, this algebra will generate the Borel sigma-algebra, and by the Caratheodory extension theorem, by defining a Measure on the sets of this algebra, we define a unique measure on the Borel sigma-algebra.

In fact he shows that the set of finite unions of open cylinders (generated by the cylinder sets) itself already forms an algebra. This is because a finite intersection of open cylinders can be expressed as a finite union of another set of open cylinders.

Then, it turns out that we can define a unique measure on this algebra if we define the measure on the open sets only, and thus we can define a unique measure on the Borel σ\sigma-algebra of the sequence space. These sets are of the form [σ][\sigma], where this is the set of all sequences that begin with string σ\sigma (these are called the basic open cylinder given by σ\sigma).

This measure is simply constructed by using the Measure additivity axiom for countable unions of disjoint (non-overlapping) sets (here applied to finite unions, as it is an algebra), and using some properties of open cylinders under intersections, which convert other arbitrary unions of open cylinders to unions of disjoint sets. This is proved from the property in the following lemma as well as the next lemma. This latter lemma uses μ([σ])=μ([σ0])+μ([σ1])\mu([\sigma]) = \mu([\sigma0])+\mu([\sigma1]), which of course follows from the additivity property of measures.

You also require some normalization condition, like μ([ϵ])=1\mu([\epsilon]) = 1 where ϵ\epsilon is the empty string, and thus [ϵ][\epsilon] is the set of all sequences that begin with the empty string, i.e. the full set.
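A minimal sketch of such a measure on basic open cylinders: the Bernoulli(p) (coin-flipping) measure, chosen purely for illustration. It satisfies both the normalization μ([ε]) = 1 and the additivity μ([σ]) = μ([σ0]) + μ([σ1]):

```python
def mu(sigma, p=0.5):
    """Bernoulli(p) measure of the basic open cylinder [sigma]: each '1' in the
    string has probability p, each '0' probability 1-p, independently."""
    out = 1.0
    for bit in sigma:
        out *= p if bit == '1' else (1 - p)
    return out

print(mu(''))      # 1.0: mu([epsilon]) is the whole space
print(mu('101'))   # 0.125 under the fair-coin measure
print(abs(mu('10') - (mu('100') + mu('101'))) < 1e-12)   # True: additivity
```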


See also Symbolic dynamics, Shift space, Entropy reduction. See book on permutation entropy.

Sequential dynamical system

guillefix 11th July 2016 at 12:53am

Set

guillefix 7th July 2016 at 6:51pm

A collection of objects/entities/things.

Cartesian product

Set theory

guillefix 29th March 2016 at 3:35pm

Sewing

guillefix 21st July 2016 at 12:54am

Sewing machine

guillefix 21st July 2016 at 12:53am

Shannon-Fano-Elias code and simplicity bias in GP maps

guillefix 26th April 2016 at 7:02pm

Argument from the Shannon code, given before the proof of the coding theorem in the information theory book.

The constant c is the length of the description of the program that computes the probability distribution. You input that program, plus the description in the Shannon-Fano code, to the Turing machine, and it should be able to give you the string you want, so this constitutes a description of the string, and thus its length is an upper bound on the Kolmogorov complexity.

If c is sufficiently small, i.e. the map is simple enough, the bound on the Kolmogorov complexity will be more stringent, and thus the coding theorem comes closer to an equality.

This argument, however, only explains why if there is bias, in a simple map, one expects the bias to correlate with Kolmogorov complexity. But it doesn't explain why there should be bias in the first place.

My arguments using transducers try to explain both, but it'd be nice to see how these two arguments fit together.

Sigma-algebra

guillefix 14th July 2016 at 3:32pm

Given a set Ω\Omega, a σ\sigma-algebra on Ω\Omega, A\mathcal{A} is a subset of the Power set of Ω\Omega (A2Ω\mathcal{A} \subset 2^\Omega), s.t.

  1. A\mathcal{A} is non-empty.
  2. A\mathcal{A} is closed under complements. EAEcAE \in \mathcal{A} \Rightarrow E^c \in \mathcal{A}
  3. A\mathcal{A} is closed under countable unions. E1,E2,...Ai=1EiAE_1, E_2, ... \in \mathcal{A} \Rightarrow \bigcup\limits_{i=1}^{\infty} E_i \in \mathcal{A}

(PP 1.2) Measure theory: Sigma-algebras

From these axioms, one can show that a sigma-algebra is closed under countable intersections too.
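On a finite ground set, where countable unions reduce to finite ones, the axioms can be checked directly; a quick sketch with made-up collections:

```python
from itertools import combinations

def is_sigma_algebra(omega, collection):
    """Check the three axioms for a collection of subsets of a finite omega
    (on a finite ground set, countable unions reduce to finite unions)."""
    A = {frozenset(s) for s in collection}
    if not A:
        return False                                  # 1. non-empty
    if any(frozenset(omega - s) not in A for s in A):
        return False                                  # 2. closed under complements
    for r in range(2, len(A) + 1):                    # 3. closed under unions
        for sets in combinations(A, r):
            if frozenset().union(*sets) not in A:
                return False
    return True

omega = {1, 2, 3, 4}
good = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]
bad = [set(), {1}, {3, 4}, {1, 2, 3, 4}]
print(is_sigma_algebra(omega, good))  # True
print(is_sigma_algebra(omega, bad))   # False: the complement {2,3,4} of {1} is missing
```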

The sigma-algebra generated by C2ΩC \subseteq 2^\Omega, written as σ(C)\sigma(C), is the "smallest" sigma-algebra containing CC. See here for the precise definition and why it always exists.

A common example is the Borel sigma-algebra.

A sigma-algebra can be generated by an algebra, as explained in the Caratheodory extension theorem

Signal processing

guillefix 1st July 2016 at 5:07pm

Similarity (Network theory)

guillefix 14th February 2016 at 9:24pm

See Measures and metrics for networks

How can we measure the "similarity" of two nodes (or edges, etc.)? Two main approaches. Two nodes may be:

  • structurally equivalent: if they share many of the same network neighbours.
  • regularly equivalent: have neighbours who are themselves similar.

Mathematical implementations of these ideas:

Structural equivalence:

Regular equivalence:

  • σ=αAσA\mathbf{\sigma}=\alpha \mathbf{A} \mathbf{\sigma} \mathbf{A} (+I+\mathbf{I}). Same as weighted sum over even paths that connect i and j

  • "Katz similarity". σ=αAσ+I\mathbf{\sigma}=\alpha \mathbf{A} \mathbf{\sigma}+\mathbf{I}. Weighted sum over all paths between i and j. Katz centrality of i is sum over the Katz similarity of i and all other nodes.

Fig Katz Sim

  • Other variants:
    • Variant of Katz similarity that divides by the degree of node kik_i in Fig Katz Sim. This is similar to the relation between PageRank and Katz centrality
    • The last term, instead of being just +I+\mathbf{I}, it is a general matrix, so we include prior similarities. Related to areas of machine learning and info retrieval that try to find similarities between things given some initial similarity data and some network or other data.
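A quick numerical sketch of the Katz similarity recursion, solving σ = αAσ + I as σ = (I − αA)⁻¹ for a small made-up adjacency matrix (α must be below 1/λ_max for the weighted path sum to converge):

```python
import numpy as np

# sigma = alpha * A @ sigma + I  has solution  sigma = (I - alpha*A)^(-1)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a small made-up undirected graph

lam_max = max(abs(np.linalg.eigvals(A)))
alpha = 0.5 / lam_max       # need alpha < 1/lambda_max for the path sum to converge
sigma = np.linalg.inv(np.eye(4) - alpha * A)

katz_centrality = sigma.sum(axis=1)   # Katz centrality of i: sum over similarities of i with all nodes
print(np.round(sigma, 3))
print(np.round(katz_centrality, 3))
```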

Another kind is automorphic equivalence See page 23 in here, as well as discussion of automorphism in Graph theory.

Similarity network

guillefix 31st January 2016 at 11:30pm

A Similarity network is one that expresses how similar entities (represented as the nodes) are, the degree of similarity being the weight of the edge.

The weight matrix AijA_{ij} represents level of similarity between entities ii and jj in the network. A similarity network is almost always complete (the only deviation from completeness is from nodes that can't be compared for some reason).

For example, if we have a matrix of votes, we can define AA as:

A=times i and j voted same waystotal number of times both i and j voted on same measureA=\frac{\text{times i and j voted same ways}}{\text{total number of times both i and j voted on same measure}}

similarity_in_networks.PNG

Simple contagion

guillefix 2nd June 2016 at 2:29am

A simple contagion is a property that spreads between individuals in such a way that an individual can get infected by simple exposure to another infected individual (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses). Often the model lives on a network that determines which individuals (nodes) interact (edges).

Compartmental models are those in which the individuals can be in any of a number of states (often "susceptible", "infected", or "recovered"), and there are certain rules for the contagion.

SI model

a.k.a susceptible-infected model. Just two states, "susceptible" and "infected". Susceptible individuals can get infected by infected individuals.

Fully mixed SI model

Assumes every individual has an equal probability (per unit time, i.e. rate) of meeting any other individual. A description is then made using a pair of Rate equations:

dXdt=βSXn\frac{d X}{dt} = \beta \frac{S X}{n} or dxdt=βsx\frac{dx}{dt}= \beta s x

where SS and XX are the average number of susceptible and infected individuals, respectively, in a population of nn individuals, and s=S/ns=S/n and x=X/nx=X/n. Furthermore, X+S=nX+S=n is unchanged in time, so s=1xs=1-x, and the above equation is equivalent to:

dxdt=β(1x)x\frac{dx}{dt}= \beta (1-x) x

which is the logistic growth equation.
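A sanity check of the logistic solution: the closed form below follows from separating variables, and a forward-Euler integration of the ODE should agree with it (β and x₀ are made-up values):

```python
from math import exp

beta, x0 = 1.0, 0.01      # made-up infection rate and initial infected fraction

def x_exact(t):
    """Closed-form solution of dx/dt = beta*(1 - x)*x (logistic growth)."""
    return x0 * exp(beta * t) / (1 - x0 + x0 * exp(beta * t))

# forward-Euler integration of the same ODE as a sanity check
dt, T = 0.001, 10.0
x = x0
for _ in range(int(T / dt)):
    x += dt * beta * (1 - x) * x

print(round(x_exact(T), 4), round(x, 4))   # both near 1: the whole population gets infected
```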

SIR model

a.k.a susceptible-infected-recovered model or susceptible-infected-removed model. Adds the possibility of recovery (and subsequent immunity). Three states: "susceptible", "infected", and "recovered". Susceptible individuals can get infected by infected individuals. Individuals can recover after some time, and then become immune to new infections.

The model can also be applied when the third state corresponds to a dead individual, since in this case the individual also doesn't participate in the network of possible infectious transmissions (though there are some subtleties in some cases; see the note on page 632 of Newman's book). For this reason the R sometimes refers to "removed", encompassing both cases.

SIS model

SIRS model

Simplicity bias

guillefix 21st July 2016 at 3:18pm

Simplicity bias is a bias observed in many GP maps (see Bias in GP maps), and in many Complex systems (which can often be looked at as GP maps). Simplicity is defined as low complexity.

Simplicity bias in discrete systems

Simplicity bias in finite-state transducers

Simplicity bias in continuous systems

See MMathPhys oral presentation

Simplicity bias in continuous systems

guillefix 23rd June 2016 at 10:24pm

See Xmorphia system in Pattern formation

Simplicity bias in discrete systems

guillefix 23rd June 2016 at 10:25pm

Simplicity bias

Discrete dynamical systems

Simplicity bias in finite state transducers

Simplicity bias in Boolean networks?

See Activities and Sensitivities in Boolean Network Models

Simplicity bias in Boolean threshold networks

Simplicity bias in other discrete systems

Discretized differential equations

Simplicity bias in finite-state transducers

guillefix 19th July 2016 at 6:21pm

An example of Simplicity bias in discrete systems

See Random automata and Evolving automata

Numerical experiments on the simplicity bias in finite-state transducers

On the theory/analysis side, I've been thinking about two questions:

  • Understanding the simplicity bias graph, for a particular FST, given its structure.
  • Understanding the statistical/average properties of random FSTs.

Ideas for understanding the simplicity bias in finite state transducers

To have sufficiently high bias, we need a small non-coding loop.

To have varied output, we need loops outside the non-coding regions. This is so that the time spent in non-coding regions can vary for different outputs.

The slope of the designability/complexity plot corresponds approximately to the Topological entropy of the non-coding region. Computed using Determinant of a graph. However, there's also a factor due to the conversion between {KC complexity} and {number of bits spent in non-coding region}. For the first FST below, for instance, by computing KC for strings like 10000000000000000...10000000000000000... and 10011110110000000...10011110110000000..., I found that KC2.7mKC \approx 2.7m. Then from topological entropy, which is log232\frac{\log_2{3}}{2}, we find a(log23/2)/2.70.29a \approx (\log_2{3}/2)/2.7 \approx 0.29, which is consistent with what I found from the graph, approximately (log2100)/230.29(\log_2{100})/23 \approx 0.29.

LZ<nmlog2(nm)+cLZ < \frac{n-m}{\log_2{(n-m)}} + c

KCLZlog2n<nmlog2(nm)log2n+cKC \approx LZ\log_2{n}< \frac{n-m}{\log_2{(n-m)}}\log_2{n} + c'

P2am+bP \approx 2^{am+b}

log2Pam+b\log_2{P} \approx am+b

mlog2Pbam \approx \frac{\log_2{P} -b}{a}

KC<nlog2Pbalog2(nlog2Pba)log2n+cKC < \frac{n-\frac{\log_2{P} -b}{a}}{\log_2{(n-\frac{\log_2{P} -b}{a})}}\log_2{n} + c'

Now, this PP refers to the frequency, which is between 1012×10510710^{12} \times 10^{-5} \approx 10^7 and 1012×10310910^{12} \times 10^{-3} \approx 10^9 (24010122^{40} \approx 10^{12})

log2(40)/log2(40log2(10n)/0.79)\log_2(40)/\log_2(40-\log_2(10^n)/0.79)

If we average this quantity for n=7,8,9n=7,8,9, we get (4.8+2+1.55)/32.8(4.8+2+1.55)/3 \approx 2.8, which is close to the 2.72.7 found from estimates above.
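These numerical estimates are easy to re-check; a quick sketch reproducing the quoted numbers:

```python
from math import log2

# slope a of the designability/complexity plot: two independent estimates
a_theory = (log2(3) / 2) / 2.7   # topological entropy over the KC-per-symbol factor
a_graph = log2(100) / 23         # read off the simplicity-bias graph
print(round(a_theory, 3), round(a_graph, 3))   # 0.294 0.289 -- both about 0.29

# the factor log2(40)/log2(40 - log2(10^n)/0.79), averaged over n = 7, 8, 9
f = lambda n: log2(40) / log2(40 - log2(10**n) / 0.79)
avg = sum(f(n) for n in (7, 8, 9)) / 3
print(round(avg, 2))   # 2.79 -- close to the 2.7 estimated from the KC computations
```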

See this paper about maximum LZ complexity, which goes like l/logll/\log{l} where ll is length of string.

See here for desmos graph.


Examples of finite-state transducers and their simplicity bias


See related stuff in Descriptional complexity

Finite state channel

Information theory, Coding theory, Algorithmic information theory

Ergodic theory, Topological dynamics, Topological entropy

More resources in simplicity bias in FSTs

Single microswimmer hydrodynamics: applications

guillefix 3rd June 2016 at 12:14am

See Active matter for background.

  • bacteria enhance diffusion as a result of the flow fields they produce

The path taken by a tracer will depend on the detailed spatial and temporal correlations of the velocity. Numerical simulations were conducted in Fluid transport by individual microswimmers. The striking feature of the tracer trajectories is their loop-like character, a consequence of the angular dependence of the flow field. Mathematically, it is because all terms in the multipole expansion, except the Stokeslet, are exact derivatives. The way this works:

Consider a tracer whose velocity (in LAB frame) is much smaller than the swimmer's velocity. Then, in the rest frame of the swimmer, the tracer follows a path which is approximately straight, and parallel to the swimmer's motion (in Lab frame). Its velocity deviation from the straight-line motion is given by the dipolar field, and so its total displacement in the LAB frame (total displacement in the swimmer's rest frame, relative to the straight-line path) is given by integrating the dipolar field approximately along the straight line from -\infty to \infty. However, because Gijxk(r)Djk\frac{\partial G_{ij}}{\partial x_k} (\vec{r}) D_{jk} is a total derivative (DjkD_{jk} is constant), and GijG_{ij} is 00 at -\infty and \infty, the total displacement is 00.
The reason we need the tracer's velocity to be much smaller than the swimmer's velocity, for the above argument is that the total displacement is given by the integral of the velocity with respect to time, i.e. v(t)dt=Cv(t)dsV+v(t)\int \vec{v}(t) dt = \int_C \vec{v}(t) \frac{ds}{|\vec{V}+\vec{v}(t)|} , where v(t)\vec{v}(t) is the dipolar velocity field of the swimmer (i.e. velocity field in its rest frame, minus the overall constant, V\vec{V}). V\vec{V} is the swimmer's velocity in the LAB frame. V+v(t)\vec{V}+\vec{v}(t) is thus the total velocity field in swimmer's rest frame. dsds is a distance element along the path CC that the particle traces. V+v(t)|\vec{V}+\vec{v}(t)| is the instantaneous speed of the particle along this path, so that dt=dsV+v(t)dt = \frac{ds}{|\vec{V}+\vec{v}(t)|}. Now, if Vv(t)|\vec{V}| \gg |\vec{v}(t)|, dtdsVdt \approx \frac{ds}{|\vec{V}|}, so that the integral is approximately a line integral of v(t)\vec{v}(t) along CC. But, when we take v(t)\vec{v}(t) into account, when v(t)\vec{v}(t) is parallel to V\vec{V}, V+v(t)|\vec{V}+\vec{v}(t)| is larger, and the contribution in the integral is less; when v(t)\vec{v}(t) is anti-parallel to V\vec{V}, V+v(t)|\vec{V}+\vec{v}(t)| is smaller, and the contribution in the integral is more. This means the particle has a displacement bias towards the direction of motion of the swimmer. This is called entrainment.
But why do the faraway tracers have a net negative displacement?

The entrainment effect is an example of Darwin drift. The Darwin drift volume has also been calculated for these active swimmers.

Contribution to diffusion

We can estimate the contribution to diffusion from the entrainment effect. We know that the Diffusion coefficient can be expressed in 3D as:

Dentr=Δx26tD_{\text{entr}} = \frac{\langle \Delta x^2 \rangle}{6t}

The entrainment length (Darwin drift) is of order aa (the size of the swimmer), when close (within distance aa) to the swimmer. Thus, Δx2a2\langle \Delta x^2 \rangle \sim a^2, whenever there is a swimmer within a volume a3\sim a^3. If there are nn swimmers per unit volume, the probability that a swimmer is in a given region of volume a3a^3 is approximately na3n a^3. Therefore, Δx2a5n\langle \Delta x^2 \rangle \sim a^5 n. Now the characteristic time step ta/Vt \sim a/V is the time scale over which the swimmer, travelling at speed VV, traverses the distance aa throughout which it interacts with the tracer particle. Therefore,

Dentr16a4nVD_{\text{entr}} \approx \frac{1}{6}a^4 n V
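An order-of-magnitude sketch of this estimate with hypothetical values; the size, speed, and density below are assumed, roughly E. coli-like, numbers, not from any particular experiment:

```python
# all numbers below are assumed, roughly E. coli-like, values -- not from any dataset
a = 1e-6       # swimmer size, metres (~1 micron)
V = 20e-6      # swimming speed, metres/second (~20 microns/s)
n = 1e15       # swimmer number density per cubic metre (a dilute suspension)

t_int = a / V                 # time for the swimmer to traverse its own size
msd = a**5 * n                # <Delta x^2> ~ a^2 per encounter, times probability n a^3
D_entr = msd / (6 * t_int)    # = a^4 n V / 6
print(f"D_entr ~ {D_entr:.1e} m^2/s")  # compare thermal D ~ 2e-13 m^2/s for a micron bead
```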

There is also a contribution to diffusion from the random reorientations that real bacteria perform at approximately regular intervals (in their run and tumble behaviour). Is the contribution to the diffusion constant from random reorientations, or finite run lengths? I think the former, due to the disappearance of λ\lambda, the run length, from the expression

Drr=4π3(κV)3nVD_{\text{rr}} = \frac{4\pi}{3}(\frac{\kappa}{V})^3 nV

where κ\kappa is a measure of the swimmer's dipole strength.

Because variances (Δx2\langle \Delta x^2 \rangle) add for independent processes, we then have that the total diffusion coefficient is approximately the sum:

D=Drr+Dentr+DthermalD = D_{\text{rr}} + D_{\text{entr}} + D_{\text{thermal}}

For different kinds of systems, some of these diffusion coefficients will dominate.

Swimmers in Poiseuille flow

Zottl and Stark paper. Swimmer equations of motion, for swimmer in background flow vf\mathbf{v}_f:

ddtr=v0e^+vf\frac{d}{dt}\mathbf{r} = v_0 \hat{\mathbf{e}}+ \mathbf{v}_f

ddte^=12Ωf×e^ \frac{d}{dt} \hat{\mathbf{e}}= \frac{1}{2}\mathbf{\Omega}_f \times \hat{\mathbf{e}}

where e^\hat{\mathbf{e}} is the swimming direction of the point swimmer. In the case of Poiseuille flow, the equation determining the angle of the swimmer is the nonlinear pendulum equation (with sin\sin).

When swimming upstream, any deviation from the centre line is subject to a restoring torque from the vorticity, and hence the swimmer trajectory oscillates around the centre of the channel. Swimming downstream, any perturbation about the centre line is amplified by the vorticity, and the swimmer tumbles in the flow. For sufficiently large velocities, it continues to tumble downstream; otherwise it reaches the walls and the simple theory must be supplemented by additional physics.
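A minimal 2D sketch of these dynamics by forward Euler: the Poiseuille profile v_f = v_max(1 − (y/h)²) has vorticity Ω_z = 2 v_max y/h², so dψ/dt = v_max y/h²; all parameter values are made up:

```python
import numpy as np

def simulate(psi0, steps=20000, dt=1e-3, v0=1.0, v_max=1.0, h=1.0):
    """Forward-Euler integration of the 2D point-swimmer equations in Poiseuille
    flow v_f = v_max*(1 - (y/h)^2) x_hat, whose vorticity gives dpsi/dt = v_max*y/h^2."""
    y, psi = 0.05, psi0               # start slightly off the centre line
    ys = []
    for _ in range(steps):
        psi += dt * v_max * y / h**2  # d(e_hat)/dt = (1/2) Omega_f x e_hat
        y += dt * v0 * np.sin(psi)    # cross-stream component of v0*e_hat
        ys.append(y)
        if abs(y) > h:                # reached a wall
            break
    return np.array(ys)

up = simulate(np.pi)    # swimming upstream: oscillates about the centre line
down = simulate(0.0)    # swimming downstream: perturbation is amplified
print(abs(up).max() < 1.0, abs(down).max() >= 1.0)   # True True
```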

One can also describe the motion of the swimmer in simple shear flow, and when there is a tendency to swim, on average, in a particular direction ("-taxis"). For instance,

  • towards gravity, gravitaxis
  • towards light, phototaxis
  • following a chemical gradient, chemotaxis.

One can use these ideas, with shear, and gravitaxis (together often termed gyrotaxis), to explain, for instance, the formation of thin layers of phytoplankton in the oceans.

Surfaces

Why micro-organisms often accumulate at surfaces

First note that a simple self-propelled rod or sphere, when it eventually hits a surface, will tend to move parallel to it, and only escape when a rotational fluctuation changes its direction enough to swim away from it.

However, there is a less trivial effect, due to hydrodynamic interactions with the wall. These can be taken into account, because the Stokes equations are linear, by considering an image swimmer at a position corresponding to the reflection of the swimmer on the wall, and pointing in the opposite direction (so as to satisfy the boundary condition of no normal flow at a free boundary (one that can slip; Like what? I mean, say a liquid-gas interface doesn't satisfy either no-slip or no normal flow, no? http://onlinelibrary.wiley.com/doi/10.1002/cpa.3160190405/abstract It's no normal stress and no tangential stress.)). The extra terms needed to satisfy the no-slip condition are more complicated, and form the Blake tensor. But doesn't the reversed mirror-image Stokeslet cancel both the normal and tangential components of the velocity at the boundary?? No, because the Stokeslet doesn't have the right symmetry, I think.


However, hydrodynamic interactions are not the only contribution. For rotating swimmers, like E. Coli, the effect of the wall drag on the torque is important; it makes the swimmer move in circles near the wall. See more at Physics of microswimmers—single particle motion and collective behavior: a review.

Singular perturbations in algebraic equations

guillefix 27th April 2016 at 9:19pm

When the limit problem (ϵ=0\epsilon =0) differs in an important way from the limit ϵ0\epsilon \rightarrow 0. For example, a root is lost, or a derivative is lost in a DE.

Problems that are not singular are called regular.

For algebraic equations, often when a root is lost, it's because it goes to \infty as ϵ0\epsilon \rightarrow 0.

Its first term in the expansion may then be 1ϵ\frac{1}{\epsilon}, for example.

For the iterative method, different functions gg may be needed to find different perturbed roots of an algebraic equation, so that the condition g(x;ϵ)0g'(x^*; \epsilon) \rightarrow 0 as ϵ0\epsilon \rightarrow 0 is satisfied.

Regularization method

Scale variables so that the problem becomes regular.

For instance, if first term in the expansion is 1ϵ\frac{1}{\epsilon}, rescale x=X/ϵx =X/\epsilon.
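A quick check of this rescaling on the made-up example εx² + x − 1 = 0: one root escapes like −1/ε as ε → 0, and rescaling x = X/ε makes both roots O(1):

```python
import numpy as np

eps = 1e-3
# eps*x^2 + x - 1 = 0: at eps = 0 one root is lost (it escapes like -1/eps)
roots = np.roots([eps, 1, -1]).real
print(roots)   # one root near 1, the singular one near -1/eps

# rescale x = X/eps: the equation becomes X^2 + X - eps = 0, a regular problem
X = np.roots([1, 1, -eps]).real
print(X)       # both roots O(1): near 0 and near -1
```

The leading terms of the expansions are x ≈ 1 − ε for the regular root and x ≈ −1/ε − 1 + ε for the singular one.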

Indeed, the problem of finding the correct starting point for an expansion, is equivalent to the problem of finding a suitable scaling to regularize the singular problem.

Finding the right scaling

Systematic approach: general rescaling

Let x=δ(ϵ)Xx=\delta(\epsilon)X, with XX strictly of order 11 as ϵ0\epsilon \rightarrow 0

Vary δ\delta from small to large to identify dominant balances, in which at least two terms are of the same order of magnitude as ϵ0\epsilon \rightarrow 0, while the others are smaller. Scalings that result in dominant balances are called distinguished limits.

Alternative approach: pairwise comparison

Quicker when there are a small number of terms. Try to create a dominant balance between terms pairwise, and see if you can get it consistently. That way you can find the distinguished limits.

Sitting

guillefix 5th July 2016 at 3:59am

Sitting is a basic human resting position. The body weight is supported primarily by the buttocks in contact with the ground or a horizontal object such as a chair seat. The torso is more or less upright. Sitting for much of the day may pose significant health risks, and people who sit regularly for prolonged periods have higher mortality rates than those who do not.

Sloppy systems

guillefix 27th June 2016 at 10:45pm

Sloppy is the term used to describe a class of complex models exhibiting large parameter uncertainty when fit to data.

The Fisher information matrix (FIM) can be used to estimate the uncertainty in each parameter in our model.


Sloppy Models

"Many models in biology, engineering and physics have a very large number of parameters. Often many of these are only known approximately. Moreover, John von Neumann's famous quip "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk" suggests that only a small set of these parameters are actually relevant. Could there be a fundamental theory of these Complex systems that allows us to work out what the key parameters are?"

Perspective: Sloppiness and emergent theories in physics, biology, and beyond publication

Parameter Space Compression Underlies Emergent Theories and Predictive Models

Universally Sloppy Parameter Sensitivities in Systems Biology Models

Sloppy-model universality class and the Vandermonde matrix

James Sethna: Sloppy models and how science works (video)

Smale horseshoe map

guillefix 8th May 2016 at 10:02pm

A Smale horseshoe map is any member of a class of chaotic maps of the square into itself, of the kind introduced by Stephen Smale in 1967 while studying the behavior of the orbits of the van der Pol oscillator.

HORSE SHOES AND HOMOCLINIC TANGLE

I read about homoclinic tangles when doing the nonlinear systems miniproject on the Duffing oscillator; see Thompson and Stewart. Nonlinear dynamics and chaos and here. Whenever a pair of invariant sets (one outgoing and one incoming) from some saddle fixed point cross in a Poincare plane (they can cross, as they don't represent trajectories), the points in the outgoing set must go outwards in the outgoing set, but the intersection point must also go inward in the ingoing set. This causes the outgoing set to cross the ingoing set at ever decreasing steps, and causes a shape like that of the Smale horseshoe. This is hard to explain without pictures.

Smale Horseshoe Map

http://www.scholarpedia.org/article/Smale_horseshoe

small_world_model.png

guillefix 31st January 2016 at 9:17pm

small_world_model2.png

guillefix 31st January 2016 at 11:40pm

Small-world model (Network theory)

guillefix 29th March 2016 at 4:42pm

Random graph models capture well the small-world properties of real networks (see Large-scale structure of networks). The mean geodesic distance grows like lnn/lnc\ln{n}/\ln{c}, that is, much more slowly than nn, the number of nodes.

However, they don't capture the high transitivity (i.e. high clustering coefficient) of real world networks (where nodes which are neighbours of the same node are more likely to be neighbours of each other, especially true in social networks). One can easily construct models with high transitivity, like the triangular lattice, or the "circle model" where each node is connected to the cc closest nodes, but these don't have small-world properties.

The small-world model is a hybrid of the two, so that it displays both high transitivity and short path lengths. It was proposed in 1998 by Watts and Strogatz. The model (Watts-Strogatz version) works by rewiring existing edges in a random fashion, becoming so-called shortcuts. Another version (Newman-Watts), that is easier to analyze analytically, doesn't rewire edges, but simply adds them (often, we add one, with probability pp, per edge in the circle model network).
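A quick sketch of the Newman-Watts construction and the effect of shortcuts on the mean geodesic distance (pure Python with breadth-first search; the values of n, c, p are arbitrary choices):

```python
import random
from collections import deque

def newman_watts(n, c=4, p=0.1, seed=0):
    """Circle model (each node joined to its c nearest neighbours) plus shortcuts
    added at random, roughly p per original edge (Newman-Watts: no rewiring)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for k in range(1, c // 2 + 1):
            adj[i].add((i + k) % n)
            adj[(i + k) % n].add(i)
    for _ in range(int(p * n * c / 2)):   # expected number of shortcuts
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def mean_distance(adj):
    """Mean geodesic distance via breadth-first search from every node."""
    n, total = len(adj), 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

l_circle = mean_distance(newman_watts(400, p=0.0))  # ~n/8: grows linearly with n
l_sw = mean_distance(newman_watts(400, p=0.1))      # a few steps: small-world
print(l_circle, l_sw)
```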

Degree distribution

It is a Poisson distribution (in the limit of large nn I think, right?), just like the random graph. However, it is cut off at cc (no degrees below cc), as we don't remove the original circle-model edges.

Clustering coefficient

Compute by counting triangles, and triads..

Mean shortest path

No exact formula known, but we know scaling of the mean shortest distance, ll:

lncf(ncp)l\approx \frac{n}{c}f(ncp).

which comes from a scaling argument. An approximate form for ff can be found by mean-field methods.

One can see that there is a wide range of values for pp so that the network exhibits both high clustering and small mean shortest distance, showing that these are not at all incompatible.

The conclusion from all this is that:

A network doesn't need that many shortcuts to have scaling of the geodesic distance that is O(lnN)O(\ln{N}), instead of O(N)O(N), i.e. to be a "small-world"

Simulating in Matlab

This page explains how to simulate the model in Matlab.

Social & cultural innovation

guillefix 1st April 2016 at 10:12pm

Social dynamics

guillefix 2nd June 2016 at 1:53am

Social sciences

guillefix 8th July 2016 at 1:35am

Study of societies.

Societies are complex systems of complex beings; in particular animals, and humans. The behaviour of the individual beings is studied in Behavioural sciences

https://en.wikipedia.org/wiki/Social_science

See https://en.wikipedia.org/wiki/Social_anthropology for human societies.

Social science and engineering is included here.

Wikipedia's contents: Society and social sciences

Social system

guillefix 23rd May 2016 at 11:32pm

Society

guillefix 1st July 2016 at 11:12pm

Society & sociology

guillefix 1st July 2016 at 11:10pm

https://en.wikipedia.org/wiki/Community

Plotch (watch+play) this: http://ncase.me/polygons/

Sociocyberneering

Societal structure (characteristic of Civilization)

societal organization:

Sociobiology

guillefix 8th April 2016 at 6:04pm

Soft materials

guillefix 3rd June 2016 at 12:14am

Soft matter physics

guillefix 11th June 2016 at 1:57am

Soft condensed matter (often abbreviated to soft matter) is basically all forms of condensed matter (i.e. many particles more or less bound together, e.g. by Intermolecular forces) that aren't solid, so that they have features that are easily deformable at low energies (room thermal energies). This includes polymers, Liquid crystals, complex fluids, Granular material, Foams, Emulsions, Colloids, and many kinds of mixtures that form mesoscopic structures. Also a lot of stuff in life falls under the "soft" category. I like it precisely because of its richness.

Wiki: https://en.wikipedia.org/wiki/Soft_matter

Typical features

Statistical physics is important, in particular interplay of energy and entropy, reflected in the free energy.

For systems of many particles, one uses Statistical field theory. Though one can further simplify by ignoring fluctuations, using a Mean field theory.

Soft materials

A fruitful way of studying phases, is to study the phase transitions between them.

Universality, coarse-graining, renormalization group. Percolation


Introduction to soft matter physics - 1 by David Pine

Thomas Speck

Software

guillefix 9th April 2016 at 1:13pm

Keylogger

Simple keylogger

To open the keylogger log (which is very long), get part of the log only, using tail: http://www.computerhope.com/unix/utail.htm

cd /var/log/

tail -c 10000 skeylogger.log

How to parse the output

https://gist.github.com/kelly-ry4n/44822005a02d9ff115c12e4075adb256

https://www.facebook.com/groups/hackathonhackers/permalink/1232180336837449/?comment_id=1232200533502096

Software engineering

guillefix 30th June 2016 at 1:16am

Software for deep learning

guillefix 9th July 2016 at 5:24am

Software validation

guillefix 4th April 2016 at 11:36pm

Solar System

guillefix 5th July 2016 at 3:32am

The Solar System is the Planetary system containing Planet Earth and the Sun.

Solarpunk

guillefix 3rd April 2016 at 2:07pm

Solid

guillefix 1st July 2016 at 11:19pm

A phase of matter characterized by elastic resistance against deformation. See Condensed matter physics.

Solid material

guillefix 1st July 2016 at 11:19pm

A solid material is a Material that is Solid at Room temperature.

Solid mechanics

guillefix 2nd May 2016 at 12:51am

Solid-state physics

guillefix 1st May 2016 at 8:31pm

See Simon's solid-state physics book, and his Oxford lectures (recorded).

How many watermelons per unit cell?

watermelons = atoms. BTW, the picture shown isn't really a unit cell, but the same method of counting atoms is used for actual unit cells.

Solution (Chemistry)

guillefix 2nd July 2016 at 5:46pm

A Dispersion (Chemistry) where the dispersed phase has particles in which all dimensions are smaller than approximately one nanometer (so that they aren't colloidal).

https://en.wikipedia.org/wiki/Solution

a solute is a substance dissolved in another substance, known as a solvent

Regular solution model

See Thermodynamics of liquid-liquid unmixing

See Chemical potential

The solution-diffusion model: a review

Sorting algorithms

guillefix 31st January 2016 at 12:36am

Source coding theorem

guillefix 1st July 2016 at 2:42pm

The average length of a code is bounded below by the entropy of the random variable that models your data.

See Data compression
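As a quick illustration (a sketch, not from the source: the symbol probabilities and the Huffman construction are my own choice), the bound is tight for dyadic probabilities, where Huffman coding achieves the entropy exactly:

```python
import heapq
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def huffman_lengths(probs):
    """Codeword lengths of an optimal (Huffman) prefix code for `probs`."""
    # Heap items: (probability, tiebreaker, list of symbol indices in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tiebreak = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1  # each merge adds one bit to every contained symbol
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]                 # dyadic distribution
H = entropy(probs)                                # 1.75 bits
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))  # also 1.75 bits
```

For non-dyadic probabilities the average Huffman length exceeds the entropy (by less than one bit), consistent with the lower bound stated above.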

Source-channel separation theorem

guillefix 2nd July 2016 at 2:23am

Space innovation

guillefix 20th May 2016 at 3:59am

Breakthrough StarShot

Breakthrough Starshot aims to demonstrate proof of concept for ultra-fast light-driven nanocrafts, and lay the foundations for a first launch to Alpha Centauri within the next generation. Along the way, the project could generate important supplementary benefits to astronomy, including solar system exploration and detection of Earth-crossing asteroids. Engineering challenges

http://www.transplanetary.com/

Space science

guillefix 8th July 2016 at 1:33am

Spanning cluster-avoiding process

guillefix 13th June 2016 at 7:11pm

A spanning cluster-avoiding process (SCA) is an Explosive percolation model based on classifying bonds into those that facilitate the creation of the spanning cluster and those that don't, and preferentially selecting those that don't. They are similar to Achlioptas processes (mm-edge processes). However, they don't require the candidate edges to be chosen at random between any pair of nodes; instead the candidate edges can belong to a predetermined underlying network, commonly a hypercubic lattice. They are capable of showing discontinuous transitions, for certain choices of the number of candidate edges chosen per step.

The most common spanning cluster-avoiding process (introduced here) starts by considering a finite hypercubic lattice Zd\mathbb{Z}^d in dd dimensions of size NN and unoccupied bonds. Then, inspired by the best-of-m model (see Tricritical Point in Explosive Percolation), the rule of the model is as follows:

  1. At each time step t, m unoccupied bonds are chosen randomly and classified into two types: bridge and nonbridge bonds. Bridge bonds are those whose occupation would form a spanning cluster.
  2. The SCA model avoids occupying bridge bonds: one of the nonbridge bonds is randomly selected and occupied. If the m potential bonds are all bridge bonds, then one of them is randomly chosen and occupied.
  3. Once a spanning cluster is formed, restrictions are no longer imposed on the occupation of bonds.
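The three steps above can be sketched in code (a minimal illustration, not the paper's implementation: the 2D lattice size, the left-edge-to-right-edge spanning criterion, and the union-find bookkeeping are my own choices):

```python
import random

class UnionFind:
    """Union-find over lattice sites, tracking whether a cluster touches
    the left or right boundary (so bridge bonds can be detected cheaply)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.left = [False] * n    # cluster touches the left edge?
        self.right = [False] * n   # cluster touches the right edge?
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def would_span(self, i, j):
        ri, rj = self.find(i), self.find(j)
        return (self.left[ri] or self.left[rj]) and (self.right[ri] or self.right[rj])
    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj
            self.left[rj] |= self.left[ri]
            self.right[rj] |= self.right[ri]

def run_sca(L=16, m=2, seed=0):
    """Run the SCA rule on an L x L square lattice; return the number of
    occupied bonds when a left-to-right spanning cluster first forms."""
    rng = random.Random(seed)
    idx = lambda x, y: x * L + y
    uf = UnionFind(L * L)
    for y in range(L):
        uf.left[idx(0, y)] = True
        uf.right[idx(L - 1, y)] = True
    bonds = [(idx(x, y), idx(x + 1, y)) for x in range(L - 1) for y in range(L)]
    bonds += [(idx(x, y), idx(x, y + 1)) for x in range(L) for y in range(L - 1)]
    unoccupied = set(bonds)
    steps = 0
    while True:
        # Step 1: choose m candidate bonds and classify them.
        candidates = rng.sample(sorted(unoccupied), min(m, len(unoccupied)))
        nonbridge = [b for b in candidates if not uf.would_span(*b)]
        # Step 2: occupy a nonbridge bond if possible, else a bridge bond.
        chosen = rng.choice(nonbridge) if nonbridge else rng.choice(candidates)
        unoccupied.discard(chosen)
        uf.union(*chosen)
        steps += 1
        r = uf.find(chosen[0])
        if uf.left[r] and uf.right[r]:  # step 3: spanning cluster formed
            return steps
```

With m=1 this reduces to ordinary random bond percolation; larger m delays the spanning transition.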

Getting the Jump on Explosive Percolation

Avoiding a Spanning Cluster in Percolation Models

These models were introduced to clarify the order of the transition in explosive percolation processes in Euclidean lattices, which had been studied numerically before: Explosive Growth in Biased Dynamic Percolation on Two-Dimensional Regular Lattice NetworksScaling behavior of explosive percolation on the square lattice.

Extensive numerical simulations and theoretical results have shown that the explosive transition in the SCA model in the thermodynamic limit can be either discontinuous or continuous, depending on the dimension and the number of potential bonds mm (see here, here, and Two Types of Discontinuous Percolation Transitions in Cluster Merging Processes).

Spatial networks

guillefix 1st June 2016 at 7:33pm

A spatial network is a network that is embedded in some space. This affects our choices of models for random graphs. An example is the Planar network.

Explicitly embedded in space vs. consequences of the system being (implicitly) embedded in space. For example, a network of borders of countries vs. a friendship network.

Barthelemy's long review (my Kami file, not sure if it'll work: here) Otherwise link to original

Lecture notes

  • Euler's formula, NE+F=2N - E + F=2 already gives many constraints to planar graphs
  • Voronoi tesellations
  • Degree distributions: the degree can be heterogeneous (like for airline or internet networks). However, stronger spatial constraints can make degree distributions very homogeneous, or even highly peaked, like for road networks. (The dual graph may still have heterogeneous pkp_k.)
  • Spatial networks often have high clustering coefficient (see transitivity), due to closer nodes being likely connected among themselves.
  • Spatial constraints usually imply neutral assortativity (i.e. neither positive nor negative assortative mixing).
  • Spectral graph theory has applications to things like stationary states of a random walk and synchronization properties.
  • In a lattice, betweenness centrality depends mostly on spatial position, and is peaked at the barycentre. Shortcuts create anomalies in this pattern.
  • For weighted spatial networks, an important measure is the correlation of strength or distance strength* (\sim geometry) and degree (\sim topology). *Distance strength is the sum of the Euclidean distances between a node and its neighbours.
  • α\alpha and γ\gamma indices measure things like density, or "meshedness". Also have "ringness".
  • Detour index is the ratio of route distance (distance actually following the network) and Euclidean ("straight line") distance between two nodes. The accessibility is the average of the detour index for paths leading to a node, and measures how easy it is to go to that node. Related to straightness centrality. See this paper.
  • Cost and efficiency optimization can determine the structure of a network. Cost can be associated with the total length of the network (compared with the minimum spanning tree), and efficiency may refer to transport performance (measures mean shortest distance), or fault tolerance (probability of staying connected when removing a node, or an edge).
  • Community detection appropriate to spatial networks is an interesting problem. Paper that may be interesting according to review, to try to apply to spatial networks.
  • Motifs, subgraphs which are found more often than would be expected (by null random graph model).
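The detour index mentioned above can be computed on a toy example (hypothetical four-node network; the graph, positions, and weights are made up for illustration):

```python
import heapq, math

def dijkstra(adj, src):
    """Shortest route distances from src in a weighted graph {node: [(nbr, w)]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Hypothetical network: four nodes on the corners of a unit square,
# with edges along the sides only (no diagonals).
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
adj = {0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(0, 1.0), (2, 1.0)]}

route = dijkstra(adj, 0)[2]         # 0 -> 2 along two sides: 2.0
euclid = math.dist(pos[0], pos[2])  # straight-line distance: sqrt(2)
detour_index = route / euclid       # sqrt(2) ~ 1.41: the grid forces a detour
```

Averaging this ratio over all destinations reachable from a node would give its accessibility.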

Empirical observation

Two kinds of spatial network topologies:

  • Planar networks
  • Non-planar spatial networks

Measure strength, clustering coefficients, and betweenness centrality, and their correlations with degree. Also assortativity. Assortativity is flat (i.e. no degree-degree correlations) because while often hubs want to preferentially connect to hubs, they can't if spatial constraints don't allow such long (on average) links.

  • Larger clustering because close nodes will connect among themselves more.

Anomalies in betweenness centrality-kk correlation. Fluctuations (for given degree) because of competition of spatial constraints (that want central nodes close to the spatial network barycenter) and degree.

Topology-traffic correlations. Nonlinear correlations between non-topological quantities (like strength and distance strength) and a topological quantity (degree). A superlinear relation between strength and degree indicates that links connecting to central (high-degree) nodes carry more traffic than average. Spatial constraints tend to cause this because they tend to reduce the number of high-degree hubs (as long links are costly). However, if the traffic stays the same, it must be distributed among the lesser-degree hubs, and so the increase of traffic with degree is faster. See page 45 of review. This is seen in strength-driven preferential attachment with spatial selection, in airline networks (and the Newman model that models them), and in the OTT (optimal traffic tree).

Real-world networks

  • Transportation networks
  • Infrastructure networks
  • Mobility network. Analyzed using origin-destination matrix
  • Neural networks

Models for spatial networks

Geometrical random graphs

  • The simplest geometric random graph: nodes randomly distributed on a plane, linked if they are close enough.
  • Random geometric graph in hyperbolic space. Gives power law deg. dist. Related to the structure of the internet !?
  • Scale free network on a lattice. See the paper. Basically given degrees (like configuration model right?), we then connect nodes on a lattice, giving preference to neighbours, or closer nodes.
  • Apollonian networks
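The first model in the list is simple enough to sketch directly (a minimal illustration; the parameter values n and r are arbitrary):

```python
import random, math

def random_geometric_graph(n=100, r=0.15, seed=0):
    """Drop n nodes uniformly in the unit square; link every pair whose
    Euclidean distance is below the threshold r."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pos[i], pos[j]) < r]
    return pos, edges

pos, edges = random_geometric_graph()
```

Away from the boundary the expected mean degree is roughly n * pi * r^2, so n and r together set the density of the graph.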

Spatial generalizations of the Erdos-Renyi graph. Random graph

  • Planar Erdos-Renyi graph
  • Hidden variable model for spatial networks. ER graph but probability of connecting depends on fitness, and on distance
  • Waxman model. Nodes uniformly distributed in space and connect with probability depending on distance (exponentially).
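The Waxman model differs from the simple geometric random graph only in replacing the hard distance cutoff with an exponentially decaying connection probability (a sketch; the beta and d0 values are arbitrary):

```python
import random, math

def waxman_graph(n=100, beta=0.4, d0=0.25, seed=0):
    """Waxman model sketch: nodes uniform in the unit square; each pair
    links with probability beta * exp(-d / d0), where d is their distance."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < beta * math.exp(-math.dist(pos[i], pos[j]) / d0)]
    return pos, edges

pos, edges = waxman_graph()
```

Here beta controls the overall edge density and d0 the typical link length, so long links are exponentially suppressed rather than forbidden.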

Spatial small worlds. The Watts-Strogatz model in a d-dimensional lattice, and where the probability of making a shortcut may depend on its length (spatial constraint).

Spatial growth models.

  • Preferential attachment with distance selection
  • Growth and local optimization

Optimization of spatial networks

  • Hubs-and-spokes structure appears in either
    • the hub location problem, where the cost of paths is basically given a priori.
    • or when optimizing both the total length and the travelling time (and the waiting time matters). This is the Newman et al model.
  • From the minimum spanning tree to the shortest path tree
    • The Steiner problem
  • Adding two antagonistic quantities

Streets tree networks and urban growth: Optimal geometry for quickest access between a finite-size volume and one point

The geometric form of the tree network is deduced from a single mechanism. The discovery that the shape of a heat-generating volume can be optimized to minimize the thermal resistance between the volume and a point heat sink, is used to solve the kinematics problem of minimizing the time of travel between a volume (or area) and one point. The optimal path is constructed by covering the volume with a sequence of volume sizes (building blocks), which starts with the smallest size and continues with stepwise larger sizes (assemblies). Optimized in each building block is the overall shape and the angle between constituents. The speed of travel may vary from one assembly size to the next, however, the lowest speed is used to reach the infinity of points located in the smallest volume elements. The volume-to-point path that results is a tree network. A single design principle – the geometric optimization of volume-to-point access – determines all the features of the tree network.

Mathematics and morphogenesis of cities: A geometrical approach


Extracting Hidden Hierarchies in Complex Spatial Networks

See notes


http://named-data.net/wp-content/uploads/2010HyperbolicGeometry.pdf

Hyperbolic geometry

http://arxiv.org/pdf/math-ph/0112039.pdf

http://www.math.miami.edu/~larsa/MTH551/hyplect.pdf

http://www.alcyone.com/max/reference/maths/hyperbolic.html

http://math.oregonstate.edu/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/surface/surface.html

http://eprints.soton.ac.uk/172655/1/2009_PIRT_Barrett.pdf

https://www.math.brown.edu/~rkenyon/papers/cannon.pdf

http://www.springer.com/gb/book/9789048186365

Spatial growth of real-world networks


Man-made networks

Evolving Transportation Networks

Measuring the Structure of Road Networks

Exploring the patterns and evolution of self-organized urban street networks through modeling

Time Evolution of Road Networks

Physical networks

Granular materials

Polymer networks (blue phases..)

Fiber networks can amplify stress

Biological networks

Roots, vascularity, leaf venation, physarum networks, neural networks...

Geometrical graphs

https://en.wikipedia.org/wiki/Outerplanar_graph

https://en.wikipedia.org/wiki/Godfried_Toussaint

Toussaint hierarchy of different kinds of geometric planar graphs. Has been applied to physarum networks

Some geometrical and spatial networks examples

How $$\beta$$-skeletons lose their edges

Special Relativity

guillefix 21st January 2016 at 8:53pm

Spectral methods

guillefix 19th February 2016 at 6:35pm

Fourier spectral discretization

Finite difference formulas create dispersion effects not found in original PDE. Similar effects seen in crystals, which are discrete by nature.

One way to avoid these is to let the order of the finite difference formula tend to infinity. We then get spectral methods. The simplest flavours are:

  • Periodic domains: Fourier spectral methods.
  • Non-periodic domains: Chebyshev spectral methods.

In the limit of infinite order, those finite differences approach the infinite Laurent matrix (or Laurent operator).

Suppose we have the values of the solution function vv on our discrete periodic grid. The spectral approximation ww to the derivative vv' is given by:

w=Dvw=Dv

where D here is the spectral differentiation matrix.

The fundamental idea of spectral collocation methods is:

1. Interpolate the data by a global interpolant (for example, a periodic trigonometric polynomial):

p(x)=j=N/2N/2ajeijxp(x) = \sum_{j=-N/2}^{N/2} a_j e^{ijx}

2. Differentiate p(x)p(x) and evaluate at the grid points.

From properties of exponential, another way to compute the 2nd Fourier spectral derivative is:

1. Given uu, compute its DFT (discrete Fourier transform) U = fft(u) (using MATLAB notation for fast Fourier transform (FFT), an efficient algorithm to compute DFT).

2. Multiply by j2-j^2: W(j)=j2UjW(j) = -j^2 U_j.

3. Take the inverse transform w = ifft(W).
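The three steps can be checked numerically (a sketch in NumPy rather than MATLAB; the test function and grid size N are my own choices):

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N        # periodic grid on [0, 2*pi)
u = np.sin(x)                           # test function; exact u'' = -sin(x)

U = np.fft.fft(u)                       # step 1: DFT of u
j = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers j
W = -(j ** 2) * U                       # step 2: multiply by -j^2
w = np.fft.ifft(W).real                 # step 3: inverse transform

err = np.max(np.abs(w - (-np.sin(x))))  # error should be near machine precision
```

Because sin(x) is band-limited on this grid, the spectral second derivative is exact up to roundoff, illustrating why spectral methods beat fixed-order finite differences for smooth periodic data.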

Similar ideas lead to the one-way wave equation.

Fill details below from lecture 10, when it's published (https://www0.maths.ox.ac.uk/courses/course/28839, and vid).

Fourier series

...

Quadrature: trapezoidal rule \Leftrightarrow integrating the interpolant

Rootfinding: via eigenvalues of companion matrix

Laurent series

...

Chebyshev series

...

Spiking neural network

guillefix 23rd June 2016 at 10:32pm

A more realistic kind of Artificial neural network. It is a model that is the basis for the design of Neuromorphic computing systems.

Spin glass

guillefix 13th July 2016 at 3:55pm

aka Sherrington-Kirkpatrick model

Disordered version of the Ising model, and corresponding magnetic materials showing disordered phases.

Spin glass models

A short course on mean field spin glasses

solvable model of a spin-glass

See also Ising model..

See also Artificial neural network (near bottom) for some cool applications

Long-Distance Behaviour of Correlation Functions in Disordered Systems

Scale Invariance and Self-averaging in disordered systems

Mechanisms underlying spin glass behaviour

Direct moment-moment coupling is too weak to account for the observed behaviour.

In a metal such as copper, the outermost atomic electrons leave the individual copper atoms and more or less freely roam through the metal (thus becoming conduction electrons). So, in an alloy like copper manganese, it might be suspected that these conduction electrons are playing some role. And that suspicion is correct.

Electron spins have two properties that are crucial to their mediation role:

  • The first is that electrons carry their own intrinsic magnetic moments. This means that the magnetic moments of the conduction electrons can interact with those of the localized moments on the manganese atoms, for example, through mutual spin flips as an electron passes by the manganese moment.
  • The second is that, as quantum mechanical objects, electrons travel through metals as waves, meaning that, like all waves, electrons can exhibit diffraction and interference. As conduction electrons zip past and interact with the localized moment, one gets concentric spheres, centered on the localized manganese moment, of conduction electron spins polarized parallel and antiparallel to the localized moment. These bands are known as Ruderman-Kittel-Kasuya-Yosida (RKKY) oscillations

Spin glass materials

  • Dilute magnetic alloy
  • Insulator spin glasses. Like:
    • europium strontium sulfide (Eu_x Sr_{1-x} S), where the magnetic impurity europium is substituted randomly for nonmagnetic strontium, with fraction x of europium
    • lithium holmium yttrium fluoride (LiHo_{0.167} Y_{0.833} F_4), in which holmium is the magnetic ion.

Static features of spin glasses

Four properties constitute the most prominent static features of materials we have come to call Spin glasses.

  • a cusp in the magnetic susceptibility,
  • a rounded maximum but no discontinuities in the specific heat,
  • spin freezing below temperature TfT_f , and
  • an absence of spatial long-range order

Dynamics of spin glasses

I.e. non-equilibrium properties.

  • "Remanence" behaviour
  • Memory effects.

https://en.wikipedia.org/wiki/Spin_glass

http://www.birs.ca/events/2014/5-day-workshops/14w5082/videos

Courses - F. Guerra “Equilibrium and off equilibrium properties of ferromagnetic...”

Statistical mechanics of spin glasses and neural networks 8\3\16 no sound :(

Spin Glasses and Complexity

Spindle (Cell biology)

guillefix 10th May 2016 at 8:20pm

The spindle, or spindle apparatus, is a structure that segregates chromosomes during cell division, and is formed by Microtubules, Molecular motors, and hundreds of other proteins. The spindle self-organizes during the division process.

(https://en.wikipedia.org/wiki/Spindle_apparatus)

For the frog Xenopus laevis, spindles are on average ~45 microns long, and ~30 microns wide. Microtubules in these spindles have an average length of ~7 microns (ref) and are at a density of ~50-100 microtubules/μ\mum^2, implying that there are ~100,000 microtubules per spindle (ref 1, ref 2). Microtubules are polar polymers whose minus ends are relatively static and whose plus ends polymerize at a speed of ~10-20 μ\mum/min (ref). There is no appreciable rate of rescues in these spindles (?) (ref), and the half-life of these microtubules is ~16s, much shorter than the typical lifetime of a spindle – which can exist for several hours. Microtubules in the spindle interact with each other via motors and cross-linkers, and continuously slide toward the poles at a rate of ~2.5 μ\mum/min (ref 1, ref 2).

Nucleation and Transport Organize Microtubules in Metaphase Spindles

Microtubular origin of mitotic spindle form birefringence. Demonstration of the applicability of Wiener's equation.

Spindle Assembly in Xenopus Egg Extracts: Respective Roles of Centrosomes and Microtubule Self-Organization

Microtubule Plus-End Dynamics in Xenopus Egg Extract Spindles

Fast Microtubule Dynamics in Meiotic Spindles Measured by Single Molecule Imaging: Evidence That the Spindle Environment Does Not Stabilize Microtubules

The kinesin Eg5 drives poleward microtubule flux in Xenopus laevis egg extract spindles. Although mitotic and meiotic spindles maintain a steady-state length during metaphase, their antiparallel microtubules slide toward spindle poles at a constant rate. This "poleward flux" of microtubules occurs in many organisms and may provide part of the force for chromosome segregation. [...] Our results suggest that ensembles of nonprocessive Eg5 motors drive flux in metaphase Xenopus extract spindles.

Spindle self-organization

guillefix 3rd June 2016 at 12:14am

See Active matter

Physical basis of spindle self-organization

Spindle

Theory

Spindle self-organization arises from:

  • the local interactions of microtubules, mediated by steric effects, cross-linkers and motors
  • microtubule polymerization dynamics (Microtubule turnover)

Microtubules in the spindle are deep within the nematic phase, as their volume fraction, 0.03\sim 0.03, is well above the volume fraction at which the isotropic phase is expected to lose stability, 0.01\sim0.01. However, their net polarity varies from parallel (with plus end towards center) at the ends, to antiparallel at the middle. Theory: The magnitude of the nematic field is taken to be constant throughout the spindle (note: the magnitude, not the direction!), while the magnitude of the polarity field depends on motor activity and self-advection. They do this because they consider the simplest theory that is consistent with all the data.

See Supporting information (annotated)

Theory based on that developed in this paper: Fluctuating hydrodynamics and microrheology of a dilute suspension of swimming bacteria. Some parts can be derived using Poisson-bracket approach to the dynamics of nematic liquid crystals.

How changes in volume due to microtubule polymerization (gaining the dimers) can also add to active stress, as in the case of cells growing in tissues: Fluidization of tissues by cell division and apoptosis

Experimental validation

Materials and apparatus

LC-PolScope, http://openpolscope.org/. Type of microscope that uses light polarization.

Metaphase arrested spindles assembled in Xenopus laevis egg extracts.

Measurement methods

LC-PolScope + Image processing -> extract spatio-temporal correlation functions from the movies obtained by microscope. Measure:

  • Retardance. Gives measure of microtubule density (if microtubules are well aligned, which they are). See video
  • Optical slow axis. Gives measure of microtubule orientation. See video

Spinning disk confocal microscope, to record 3D time-lapse movies of spindles labeled with high concentration of fluorescent tubulin. These give 3D measurements of the density. See video

Measuring stress fluctuations:

  • Passive two-point particle displacement measurement. See video
  • Active microrheology measurement of the frequency-dependent shear modulus of the spindle by Shimamoto et al.

They obtained two-point particle displacements by tracking single molecules of fluorescently labeled tubulin, and computed the two-point correlation between these single molecules along the direction perpendicular to the spindle axis.

Internal dynamics of spindle

http://www.pnas.org/content/111/52/18496/F1.expansion.html

Measuring correlations. In particular, they measure correlations of the fluctuations at each pixel in the image relative to the time-average value of that pixel. This is so that the correlations don't contain information on the more or less steady average spatial structure of the spindle, and so we focus on the fluctuations on top of it. The Fourier transform of an autocorrelation gives the Power spectral density (PSD), which they use to compare predictions with experiment. They also use these comparisons to fit the parameters of the theory, as is done in many instances in Condensed matter physics, as they point out. They also show that their parameters are relatively few, showing strong predictive power of the theory, and also meaning that the agreement with experiment is strong validation of the theory.

Measurement results:

  • microtubule orientation autocorrelation (AC) function.
    • Fourier transform of equal-time spatial AC: 1/q21/q^2, where qq is the wave number. Why don't we look at ω0\omega \rightarrow 0 (i.e. time average of signal) in analogy to what we do below?
    • Fourier transform of time autocorrelation for the q0q \rightarrow 0 component (i.e. average over space of fluctuation): 1/ω21/\omega^2, where ω\omega is the frequency (Fourier variable).
    • Both of these correspond to linear decay in real space (in space or time, respectively). See comment below
    • They are not compatible with other competing theories Why?.
  • density autocorrelation function
    • Fourier transform of equal-time autocorrelation function along direction perpendicular to the spindle axis (wavenumber along this direction is qq_\perp):
      • plateaus for small qq_\perp
      • decays as 1/q41/q_\perp^4 for large qq_\perp.
    • Fourier transform of long-wavelength limit of time autocorrelation function goes like 1/ω21/\omega^2 too.
  • orientation-density cross-correlation function
  • the generation and propagation of stress in the spindle.
    • The two-point displacement correlation function decays as the inverse of the particle separation, RR.
    • The two-point displacements exhibit super-diffusive motion with an exponent α1.8\alpha\approx 1.8. When combined with the active microrheology measurements, reveals that stress fluctuations in the spindle increase linearly with time lag.

These are all are consistent with the theory, as can be seen in the figure below:

http://www.pnas.org/content/111/52/18496/F2.expansion.html

Morphology of the spindle

The calculated orientation of microtubules throughout the spindle quantitatively agrees with their LC-Polscope measurements.

They reproduced the observed spatial variation of polarity

Calculated aspect ratio closely agrees with observation

http://www.pnas.org/content/111/52/18496/F3.expansion.html


Other spindle phenomenology to further investigate using the above theory:

  • Fusion of two spindles
  • Response of the spindle to physical perturbations
  • Molecular perturbations, which should act to change the parameters of the theory

Nonequilibrium mechanics of active cytoskeletal networks.

Microrheology, Stress Fluctuations, and Active Behavior of Living Cells. We report:

  • the first measurements of {the [intrinsic strain fluctuations] of {living cells}} using {a recently developed tracer correlation technique}
  • along_with a theoretical framework for {interpreting [such data] in {heterogeneous media with nonthermal driving}}.

The {[fluctuations]’ spatial and temporal correlations} indicate that {the cytoskeleton can be treated as a {coarse-grained continuum with power-law rheology, driven by a spatially random stress tensor field}}.

{Combined with recent cell rheology results, our data} imply that {{intracellular stress fluctuations have a nearly 1/ω21/\omega^2 power spectrum}, as expected for a continuum with a slowly evolving internal prestress.}

A 1/ω21/\omega^2 spectrum corresponds to a linear decay in time of a stress-stress correlation function (see WA computation, notice dividing by ω\omega is like integrating the Fourier transform) within our experimental time window, and would be a natural consequence of slow evolution of intracellular stress. Explanation: The stress generation/relaxation may rely on a number of modes with diverse timescales, τi\tau_i. In the simplest case, a stress autocorrelation would then be multiexponential, consistent with our result if all τi\tau_i lie well outside of our measurable range. This is because the exponentials appear linear when the exponent t/τi1t/\tau_i \ll 1.

High-resolution probing of cellular force transmission.

Sponge (tool)

guillefix 8th July 2016 at 3:16am

Sport

guillefix 17th May 2016 at 1:36am

Standing

guillefix 5th July 2016 at 3:58am

Standing, also referred to as orthostasis, is a human position in which the body is held in an upright ("orthostatic") position and supported only by the feet.

Star

guillefix 5th July 2016 at 3:28am

Static features of spin glasses

guillefix 12th July 2016 at 4:07pm

Four properties constitute the most prominent static features of materials we have come to call Spin glasses.

  • a cusp in the magnetic susceptibility,
  • a rounded maximum but no discontinuities in the specific heat,
  • spin freezing below temperature TfT_f , and
  • an absence of spatial long-range order

Dilute magnetic alloys at higher concentrations of magnetic impurities were the first experimental examples of spin glasses. Because the spins interact, it was expected the system would have some sort of ordered phase at low temperatures. Indeed a Phase transition was observed, with a susceptibility cusp at a particular transition temperature TfT_f. The high temperature phase was a paramagnetic phase. Then experiments on the nature of the lower temperature phase were conducted.

There exists a variety of experimental probes that can provide information on what the atomic magnetic moments are doing, and measurements using these probes indicated several things.

  • First, the spins were “frozen”; that is, unlike in the high-temperature paramagnetic phase, in which each spin flips and gyrates constantly so that its time-averaged magnetic moment is zero, at low temperatures each spin is more or less stuck in one orientation.
  • Second, the overall magnetization was zero, ruling out a ferromagnetic phase. But third, more sensitive probes indicated there was no long-range antiferromagnetic order either: in fact, as near as could be told, the spins seemed to be frozen in random orientations.

However, the phase transition had some more surprises to reveal. Recall that at a phase transition, all the thermodynamic functions behave singularly in one fashion or another. Surely the specific heat, one of the simplest such functions, should show a singularity as well. However, when one measures the specific heat of a typical spin glass, one sees . . . absolutely nothing interesting at all. All you see is a broad, smooth, rounded maximum, which doesn’t even occur at the transition temperature (defined to be where the susceptibility peak occurs). A typical such measurement is shown in figure 4.2.

So, returning to the topic at hand, we’re faced with the following question: Is there a true thermodynamic phase transition to a low-temperature spin glass phase characterized by a new kind of magnetic ordering? Or is the spin glass just a kind of magnetic analog to an ordinary structural glass, where there is no real phase transition and the system simply falls out of equilibrium because its intrinsic relaxational timescales exceed our human observational timescales? If the latter, then the spins wouldn’t really be frozen for eternity; they would just have slowed down sufficiently that they appear frozen on any timescale that we can imagine.

As of this writing, the question remains open.

Stationary_solution_to_FP_eq.png

guillefix 21st January 2016 at 5:28pm

Statistical field theory

guillefix 15th February 2016 at 9:39pm

A statistical field is often derived by averaging microscopic physics over mesoscopic lengthscales (in a particular way called coarse graining). This results in a free energy, FF, which (when exponentiated) gives the weight factor over which we integrate to get the partition function, ZZ. As the averaging gives a (macroscopic) field (as an approximation to a lattice average), the integral for ZZ is a Functional Integral, expressible as a Path Integral.

Origin of Universality (Why do many field theories look like each other?)

This free energy can be written as a power series in the field. It turns out that only a few terms (the renormalizable ones, and maybe a few non-renormalizable ones) contribute for a given precision of interest (this is understood via the Renormalization Group).

Thus, the only thing that fundamentally differentiates one theory/model from another are the symmetries of the field, which determine which terms can appear in the free energy. Dimensionality and transformation properties of the field (whether it is a scalar, a vector, a spinor, ...) also play a role.

The microphysics only enters through the parameters of the theory. But as these are often few, they can be and most often are determined experimentally. For this reason statistical field theories are often referred to as phenomenological.

Similar considerations apply in Quantum field theory

The Landau-Ginzburg Hamiltonian

Assumptions:

  • Locality and uniformity
  • Isotropy
  • Stability

Statistical inference

guillefix 9th July 2016 at 3:12am

Statistical physics

guillefix 13th July 2016 at 3:41pm

Statistical physics deals with the description of systems for which a deterministic description is either useless or impossible, so that one uses a statistical description.

Here a deterministic description is understood in the context of the relevant physical description. For example Schrodinger's equation is deterministic, if the relevant physical description is the wavefunction. It is non-deterministic if one takes position and/or velocity as the relevant physical descriptions. However, it is known that one can't describe quantum mechanical evolution purely with a statistical theory of position and velocity, without sacrificing some rather well-established physical principles or predictions.

If the system is effectively classical (either because it is macroscopic, or for some other reason, that is probably ultimately related to Quantum decoherence), the need for a statistical description arises when the system is sufficiently chaotic. Most often this requires the system to: have many components and/or be coupled to a system with many components.

For this reason, statistical physics is mostly applied to the description of systems of many particles in a gas, liquid or solid; or to one or a few particles coupled to one such large system.

There are two main branches of statistical physics:

Equilibrium statistical physics deals with such systems at equilibrium, that is, when the relevant macroscopic averages of the statistical description don't change with time. In practice, one often has two approaches:

  • For a small system coupled to a large chaotic system, one often has to use a probability distribution function over the relevant degrees of freedom (amazingly, for equilibrium, this always takes the form of a Boltzmann distribution).
  • For a large system, one can often bypass the distribution function and deal with the relevant averaged quantities directly, resulting in a thermodynamic description.
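As a concrete sketch of the first approach, the Boltzmann distribution over a discrete set of energy levels can be computed directly (the function name and the two-level example below are purely illustrative):

```python
import math

def boltzmann(energies, kT):
    # Boltzmann distribution: p_i proportional to exp(-E_i / kT); Z is the partition function
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

p = boltzmann([0.0, 1.0], kT=1.0)   # two-level system with gap dE = 1
```

Raising kT flattens the distribution; lowering it concentrates all the weight on the ground state.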

Non-equilibrium statistical physics deals with such a system out of equilibrium, so that averages can change in time. This is much harder to do in full generality, as systems offer much more diversity out of equilibrium, as may be expected. One often has three approaches:

  • For a small system coupled to a large chaotic system, one has a stochastic process, which describes the evolution under the random influence of the large chaotic system.
  • For a large system which is only slightly out of equilibrium, so that relevant macroscopic averages analogous to those used in thermodynamics can still be defined, one can describe the system using Non-equilibrium thermodynamics
  • For a large system that is considerably out of equilibrium, one has to use the tools of Kinetic theory to describe it. However, if the system is very far from equilibrium, even these may be inappropriate, and finding an appropriate description may be extremely hard. Examples of this are systems with strong Turbulence. Our approaches to understanding these systems are often merely phenomenological.

See also Complex systems, and Sloppy systems

Entropy, Order Parameters, and Complexity

Long-range interacting systems

Oxford physics course

Oxford maths course

Bangalore School on Statistical Physics - V (video lectures)

Bangalore School on Statistical Physics - VI (I'm on the 1st lecture on Long-range interacting systems)

Ergodic theory

See about disordered systems in Condensed matter physics, as these are interesting systems studied using statistical physics.

Indian Statistical Physics Community Meeting 2016

Interesting papers on statistical physics and complex systems

PRE- More Kaleidoscopes for April 2016

Non‐equilibrium thermodynamics: foundations, scope, and extension to the meso‐scale

Non-equilibrium thermodynamics - de Groot and Mazur

Statistical Mechanics II course

Sethna's Statistical Mechanics: Entropy, Order Parameters, and Complexity

MIT 8.333 Statistical Mechanics I

MIT 8.334 Statistical Mechanics II

http://stp.clarku.edu/notes/

Statistical physics, Optimization, Inference and Message-Passing algorithms


Foundations of statistical mechanics

What Is a Macrostate? Subjective Observations and Objective Dynamics

The Backwards Arrow of Time of the Coherently Bayesian Statistical Mechanic

Ludwig Boltzmann and entropy. Lots of stuff about entropy..


Philosophy of statistical physics

Probability in physics: stochastic, statistical, quantum

Rethinking equilibrium

Book: Ensemble modeling : inference from small-scale properties to large-scale systems

Statistics

guillefix 15th July 2016 at 9:42pm

Stellar astronomy

guillefix 5th July 2016 at 3:28am

Stochastic dynamics of self-propelled colloids

guillefix 11th June 2016 at 1:21am

Self-Motile Colloidal Particles: From Directed Propulsion to Random Walk (experiment)

Anomalous Diffusion of Symmetric and Asymmetric Active Colloids

At times long compared to the rotational diffusion time, rotational diffusion leads to a randomization of the direction of propulsion, and the particle undergoes a random walk whose step length is the product of the propelled velocity V and the rotational diffusion time, leading to a substantial enhancement of the effective diffusion coefficient

Stochastic geometry

guillefix 15th June 2016 at 4:47pm

Stochastic processes

guillefix 27th June 2016 at 10:55pm

Links

Notes on Nonequilibrium StatPhys MT2015 Oxford (mostly stochastic processes)

Nice lecture notes

Discrete Stochastic processes MIT course

Stochastic processes MIT notes

Nice notes on applications of stochastic processes

Wikipedia: Stochastic process

List of stochastic processes topics

Watch: Physics - Physical Applications of Stochastic Processes by Prof. V. Balakrishnan


Stochastic processes

Probability theory

Martingales, Martingales Through Measure Theory

Examples

Classification of models

Descriptions

All these generally are Markov processes

  • Continuous space-time
  • Discrete space
    • Probability description \rightarrow Master equation
      • Discrete time \rightarrow Difference equation. Discrete time master equation.
      • Continuous time \rightarrow Differential equation: continuous time master equation.
  • Continuous space-discrete time ??. An example is the beginning of the derivation for Brownian motion by Einstein
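The discrete space, discrete time case can be sketched as a difference equation for the probability vector; the transition matrix below is an arbitrary illustrative choice:

```python
import numpy as np

# Discrete-time master equation for a two-state Markov chain:
# P(t + 1) = P(t) @ M, with M a row-stochastic transition matrix (made-up numbers).
M = np.array([[0.9, 0.1],
              [0.5, 0.5]])

P = np.array([1.0, 0.0])        # start with all probability in state 0
for _ in range(200):            # iterate the difference equation
    P = P @ M
# P converges to the stationary distribution pi = pi @ M, here (5/6, 1/6)
```

Probability is conserved at every step because each row of M sums to one.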

Important results

  • Dissipation-fluctuation relation. Friction and dissipation are due to the random movements of particles. Fluctuations are too. The coefficients describing them (diffusion coefficient and viscosity) should be related.
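A minimal numerical illustration of this relation, assuming the overdamped Langevin description in which the Einstein relation D = kT/gamma holds: fluctuations generated with this D produce diffusive spreading, with mean-squared displacement 2Dt in 1D.

```python
import numpy as np

kT, gamma = 1.0, 2.0
D = kT / gamma                  # Einstein relation: diffusion from dissipation
dt, steps, n = 1e-3, 2000, 20000

rng = np.random.default_rng(0)
x = np.zeros(n)                 # n independent 1D Brownian particles
for _ in range(steps):
    # each step is a Gaussian kick with variance 2 * D * dt
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n)

t = steps * dt
msd = np.mean(x**2)             # expect <x^2> = 2 * D * t
```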

Computational methods

Monte Carlo method
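A minimal sketch of the Metropolis Monte Carlo method, here sampling a 1D harmonic oscillator (an illustrative choice) at kT = 1, where equipartition gives <x^2> = kT:

```python
import numpy as np

rng = np.random.default_rng(1)
x, samples = 0.0, []
for _ in range(200_000):
    x_new = x + rng.uniform(-1.0, 1.0)
    # Metropolis rule for E(x) = x^2 / 2 at kT = 1:
    # accept with probability min(1, exp(-(E_new - E_old) / kT))
    if rng.random() < np.exp(-(x_new**2 - x**2) / 2.0):
        x = x_new
    samples.append(x)

# discard a burn-in period, then estimate <x^2>; should be close to kT = 1
mean_x2 = np.mean(np.array(samples[20_000:])**2)
```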

Other mathematical aspects

https://en.wikipedia.org/wiki/It%C3%B4_calculus

Applications

Chemistry

Chemical kinetics

Oscillating chemical reactions

Biology

Enzyme kinetics

General phenomena

Number fluctuations

Others

Telegraph noise

Complex systems


Recent paper by Ramin Golestanian (26th Feb 2016): http://pubs.acs.org/doi/pdf/10.1021/acs.nanolett.5b04372 on power spectrum for electric-field-driven ion transport through nanopores. Apparently Pink noise (noise that has a power-law power spectrum, instead of flat, as for white noise) is commonplace in situations with electric fields, and the underlying mechanism is not totally understood.

https://en.wikipedia.org/wiki/Point_process

Stochastic processes with JS: https://www.npmjs.com/package/stochastic

String (Computer science)

guillefix 14th July 2016 at 2:53pm

A string, in Computer science, Information theory, and Mathematics, is a Sequence of symbols, where each symbol is a member of a given set, called the alphabet. Strings often refer to finite sequences. See here.
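A quick illustration: the set of strings of length n over an alphabet of size k has k**n elements, which can be enumerated directly:

```python
from itertools import product

alphabet = ['0', '1']          # a finite alphabet
n = 3
# all strings of length n over the alphabet, in lexicographic order
strings = [''.join(s) for s in product(alphabet, repeat=n)]
# there are len(alphabet)**n == 8 of them: '000', '001', ..., '111'
```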

These constructions are useful in Mathematics and Computer science.

Strings in computer science

In computer science, strings are one of the fundamental Data types used in Programming. In this case, the symbols are called characters.

However, a string can also be considered as a Data structure

String theory

guillefix 24th June 2016 at 1:32am

Structural analysis

guillefix 23rd May 2016 at 11:20pm

Sun

guillefix 5th July 2016 at 3:32am

The Star that gravitationally bounds together the Solar System

https://en.wikipedia.org/wiki/Sun

Supervised learning

guillefix 12th July 2016 at 12:32am

Training data consists of inputs and outputs. Other names for inputs: predictors, independent variables, features. Other names for outputs: responses, dependent variables.

In supervised learning, we want to find a function relating inputs to outputs, to then be able to predict new outputs from new inputs. We need a way to represent the function approximation, with some parameters (the model). Some examples of models:

and a learning algorithm to find best parameters for the data, so that the model can predict well. See Learning theory.

New paradigm: Deep learning

Generative vs discriminative models

Discriminative learning

Learning the function p(\text{output}|\text{input}). See notes

Regression

Output value is continuous, and quantitative (i.e. it has an ordering, and a notion of closeness (metric)).

Classification

Output value is discrete, or categorical, or qualitative. No implicit ordering, or closeness on the variables. Simple approach: Logistic regression
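A minimal sketch of logistic regression fit by batch gradient descent, on a made-up, linearly separable 1D toy dataset (all numbers below are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy data, separable at x = 0
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

# model: p(y = 1 | x) = sigmoid(w * x + b)
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    # gradient of the average cross-entropy loss
    gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * gw, b - lr * gb
```

After training, points with x > 0 get probability near 1 and points with x < 0 get probability near 0.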

General methods

Artificial neural network (see Deep learning)

Support vector machine

Generative learning

Learning the function p(\text{input}|\text{output}), which can be used to find p(\text{output}|\text{input}) using Bayes' theorem. See notes
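A minimal numeric sketch of this: starting from a generative model (priors p(y) and likelihoods p(x|y), with made-up numbers), Bayes' theorem yields the discriminative posterior p(y|x):

```python
# made-up generative model: two classes, two observable values
prior = {'A': 0.5, 'B': 0.5}
likelihood = {('x1', 'A'): 0.8, ('x1', 'B'): 0.2,
              ('x2', 'A'): 0.2, ('x2', 'B'): 0.8}

def posterior(y, x):
    # Bayes' theorem: p(y|x) = p(x|y) p(y) / sum_c p(x|c) p(c)
    evidence = sum(likelihood[(x, c)] * prior[c] for c in prior)
    return likelihood[(x, y)] * prior[y] / evidence
```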

Gaussian discriminant analysis

Naive Bayes

Model assessment

Variance. How much the model varies with fluctuations of the training data, i.e. how stable it is.

Bias. How many assumptions the model imposes, i.e. how inflexible it is. Well, that's maybe only one way to look at it..

See explanation here

Cross-validation

Test the model on data you haven't used for training.

min-max, average

https://www.cs.cmu.edu/~schneide/tut5/node42.html

Wikipedia has good explanations: https://en.wikipedia.org/wiki/Cross-validation_(statistics)

One can show (modulo technical details I don't know..) that, given the real distribution of the data and a sample used for training, one is likely to underestimate the error. So I think cross-validation can be shown rigorously to be good for assessing a model's predictive power (i.e. probability of predicting correctly). See the Elements of Statistical Learning book for all the details..

It is a way to find out if you are overfitting
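A minimal sketch of a plain k-fold split (index bookkeeping only, no model): each point is held out exactly once, so the model is always assessed on data it was not trained on.

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists; every index is in exactly one test fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, 5))  # 5 folds over 10 data points
```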

Related: https://en.wikipedia.org/wiki/Testing_hypotheses_suggested_by_the_data

Support vector machine

guillefix 9th July 2016 at 4:42am

Surface science

guillefix 2nd July 2016 at 5:44pm

Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics.

See Materials science, Condensed matter physics, Chemistry

https://en.wikipedia.org/wiki/Surface_science

Surface chemistry

Surface physics

Fluid dynamics at interfaces

See Colloid Transport by Interfacial Forces

Interfacial forces

Fluid/fluid interfaces

Governed mostly by (apparent) discontinuities in stress, particularly surface tension. These are known as "Marangoni effects", or "capillary-driven flow".

Solid/fluid interfaces

Governed mostly by slip velocity at the interface.

These are responsible for several of the Phoretic mechanisms of colloids, which cause them to move along gradients of some quantity.

Surface tension

Pervaporation

Surface tension

guillefix 14th June 2016 at 7:13pm

See LectureNotes notes (preparing for physsoc class)

Hydrophobicity

Surfactant

guillefix 11th May 2016 at 12:07pm

Survival of the flattest

guillefix 26th April 2016 at 7:16pm

An effect whereby large neutral spaces are also effectively favoured, but in equilibrium, not out of equilibrium as in the Arrival of the frequent

See comments on Arrival of the frequent, for more comparisons.

Original paper: Evolution of digital organisms at high mutation rates leads to survival of the flattest

Suspension

guillefix 9th May 2016 at 10:01pm

A suspension is a dispersion of solid particles in a liquid (IUPAC definition). For the particles to be definable as solid, they must have at least some size, and thus a suspension requires particles of colloidal size, or larger.

Some authors use suspension to refer to those suspensions where the particles are large enough to sediment. The case of smaller particles (like colloidal particles) may then be called a Sol (colloid).

Swarm robotics

guillefix 9th June 2016 at 4:59pm

Symbolic dynamics

guillefix 14th July 2016 at 3:49am

Symbolic method for unlabelled structures

guillefix 28th June 2016 at 5:15pm

The symbolic method of Analytic combinatorics, applied to unlabelled structures. It uses the ordinary generating function.

See here for slides. video.

Elementary identity: A(z) = \sum_{N \geq 0} A_N z^N, where A_N is the number of objects of size N

Trees and Catalan numbers

lecture

The number of rooted ordered trees of n nodes is the nth Catalan number. Can derive the GF by using the fact that "a tree is a node and a sequence of trees". See here.
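The "a tree is a node and a sequence of trees" decomposition translates into the standard Catalan convolution C(n + 1) = sum_i C(i) C(n - i); a direct sketch, using the convention C(0) = 1 (so rooted ordered trees with n + 1 nodes are counted by C(n)):

```python
def catalan(n):
    # C(0) = 1; C(m + 1) = sum_{i = 0}^{m} C(i) * C(m - i)
    C = [1]
    for m in range(n):
        C.append(sum(C[i] * C[m - i] for i in range(m + 1)))
    return C[n]

# the first few values: 1, 1, 2, 5, 14, 42, ...
```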

Can easily extend to binary trees, as done in video

Trees have been related to other combinatorial structures: gambler's ruin sequences, context-free languages, triangulations, ...

Strings

lecture

Powersets and Multisets

Powersets and Multisets

Symmetric property

guillefix 14th July 2016 at 1:05am

The symmetric property, or just symmetry, in Set theory, is a property of a binary Relation on a Set X:

for all x, y \in X, xRy implies yRx
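A direct sketch of this definition, representing a relation as a set of ordered pairs:

```python
def is_symmetric(R):
    # R is a set of ordered pairs (x, y); symmetric iff (y, x) is always present too
    return all((y, x) in R for (x, y) in R)

R1 = {(1, 2), (2, 1), (3, 3)}   # symmetric
R2 = {(1, 2), (2, 3)}           # not symmetric: (2, 1) is missing
```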

Symmetry

guillefix 13th July 2016 at 9:01pm

Symmetry breaking

guillefix 28th January 2016 at 2:29am

Discrete symmetry breaking

Continuous symmetry breaking. Goldstone theorem

–I think: A matter of time-scales??

Synthetic biology

guillefix 25th April 2016 at 11:44pm

(Lecture Notes in Artificial Intelligence volume 5777) Kampis, Karsai, Szathmáry-Advances in Artificial Life_ Darwin Meets von Neumann, Part 1(2011)

Xenobiology


Membrane properties.

Protein pores: transport polymers across membranes. Often they have to unfold and fold.

Stochastic sensing.

Single-molecule chemistry

Protein engineering, chemical synthesis, biophysical methods

alpha-Hemolysin protein pore.

How can a water soluble protein assemble into a transmembrane pore?

3D droplet networks are tissue-like materials: networks of aqueous droplets.

Synthetic biology, Woolfson and Bromley 2011. Nice diagram

Synthia

...completely synthetic cells.

Protein components for nanodevices, Bayley et al.

Water droplets in oil form a monolayer, but two of these will tend to come together and form a bilayer. There is a force that attracts them. Probably kinetically stable.

Lipid coated hydrogels as components...

The 7R "diode"

Folding droplet networks using osmolarity: different salt concentrations in each.

Soft robots. Light for sensing, power generation and patterning.. Bacteriorhodopsin: light-driven proton pump.


https://autodeskresearch.com/groups/bionano

http://www.synthace.com/

http://ginkgobioworks.com/

http://www.nanalyze.com/2016/03/3-companies-building-nanorobot-factories/

Systems biology

guillefix 1st July 2016 at 2:05am

Systems Biology DPhil

guillefix 14th July 2016 at 5:05pm

http://www.sysbiodtc.ox.ac.uk/

Application for admission as a graduate student to the University of Oxford

Your offer and contract

Academic conditions

Achieve the EPSRC minimum of a 2:1 classification in your current programme of study and provide a hard-copy original or certified copy of your final transcript. Once you have met the condition above, please inform us as soon as possible by sending the relevant official documentation to the address above. We need to receive this information by 31 August 2016

As I will have the Degree ceremony (MMathPhys) on September, the proof I need to send is a Degree confirmation letter (see here)

A place for the EPSRC Systems Biology Doctoral Training Centre beginning 3 October 2016

Completion of Conditions letter

If you satisfy all the conditions set by both the department and the college in their offer letters, you will be sent a final letter by your department confirming your place. I expect during summer.

Supervisors (for 1st year)

Jonathan Cooper

Joe Pitt-Francis

See MMathPhys oral presentation, and GKeep for topic ideas.

For DPhil period. Potential supervisor is Ard Louis.

EPSRC studenship

This offer includes full funding in the form of a prestigious EPSRC Studentship which covers both University and College fees, and also includes a stipend award to cover living expenses (£14,057 per annum at current rates). This funding covers the entire four year duration of the programme.

More details about how I will receive the scholarship? When, how. See email: You do not need to worry about the studentship - we pay the students on behalf of the EPSRC. You will receive an email from the Finance Officer before you arrive asking for bank details, but the first payment will be given to you as a cheque on your first day here.

Kellogg college

College Offer Letter

Kellogg College Essential Information for Offer Holders 2016-17

you will need to be in College in time to attend induction events for graduate students, which are expected to begin on 3 October 2016. A wider range of welcome events will run from 23 September 2016. Your departmental induction programme may start on a slightly different date, so you will need to arrange to be in Oxford in time for whichever begins first.

Research interests

Systems biology

Non-equilibrium statistical physics

Nanotechnology and Artificial intelligence, as well as ways of combining them (see section in nanotech tiddler)

Systems engineering

guillefix 8th July 2016 at 1:35am

Systems science

guillefix 8th July 2016 at 2:41am

Interdisciplinary sciences

Mostly about systems, synthesizing, going beyond the reductionism and analysis of the basic foundational sciences.

List of systems science journals

Cybernetics

Principia Cybernetica Electronic Library

http://www.emeraldinsight.com/loi/k

Systems theory

Philosophy

http://www.vub.ac.be/CLEA/dissemination/groups-archive/vzw_worldviews/


https://en.wikipedia.org/wiki/Systems_science

Systems theory

guillefix 8th July 2016 at 3:07am

Table (furniture)

guillefix 5th July 2016 at 4:05am

A table is an item of furniture that provides a surface at a height appropriate for manipulating objects while Standing, or Sitting

Taxis

guillefix 9th June 2016 at 6:51pm

Taxis refers to a behavioural response by an organism to a directional stimulus or gradient of stimulus intensity.

See Phoretic mechanisms of self-propelled colloids for similar mechanisms in simpler active colloid systems.

Taxonomy

guillefix 8th July 2016 at 6:50pm

Taxonomy: Life's Filing System - Crash Course Biology #19

Tree of life

Homologous traits

Binomial nomenclature

Taxa

groups of organisms

Domain

Kingdom

Phylum

Class

Order

Family

Genus

Species


https://www.wikiwand.com/en/Taxonomy_(biology)

Taylor series

guillefix 25th June 2016 at 3:17pm

Technology & Engineering

guillefix 17th May 2016 at 1:42am

Technologies are pieces of Art with a very clear purpose, and thus must use the more rigorous methods of Science. The purpose of technologies is often to extend what we can do.

Engineering is the art and science of making new technology.

Portal:Technology

Portal:Contents/Technology and applied sciences

Wikipedia:Portal/Directory/Technology and invention

Technology & innovation

guillefix 7th May 2016 at 1:31am

Telecommunication

guillefix 7th May 2016 at 1:43am

Telecommunications engineering and technology

Temporal networks

guillefix 16th June 2016 at 8:20pm

test tiddler

guillefix 21st January 2016 at 1:20pm

test content

Textile

guillefix 21st July 2016 at 12:54am

A textile, or cloth, is a kind of flexible Composite material consisting of a network of natural or artificial fibres (yarn or thread).

https://www.wikiwand.com/en/Textile

Textile manufacturing

Textile art

Textile art

guillefix 21st July 2016 at 12:54am

Textile art tools

guillefix 21st July 2016 at 12:53am

Textile manufacturing

guillefix 21st July 2016 at 12:52am

The Horn of Alexander the Great

guillefix 12th June 2016 at 4:25pm

http://www.jstor.org/stable/25221013?seq=1#page_scan_tab_contents

The treatise of Walter de Milimete

See El mundo fisico, de Guillemin, Phonurgia nova, de Athanasius Kircher, Secreta secretorum, etc.

https://web.stanford.edu/group/kircher/cgi-bin/site/?attachment_id=679

"Know thou, moreover, that the people aforetime have produced things which the contemporary men of knowledge have been unable to produce. We recall unto thee Murtús* who was one of the learned. He invented an apparatus which transmitted sound over a distance of sixty miles."

http://bahaiasheboro.blogspot.co.uk/2010/05/know-thou-moreover-that-people.html

The structure of the genotype-phenotype map strongly constrains the evolution of non-coding RNA

guillefix 23rd April 2016 at 12:28am

See MMathPhys oral presentation

The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA

Non-coding RNA (ncRNA) is RNA whose function is not to encode protein. Its function may then be structural, or catalytic, for instance, and is most often determined by its secondary structure, which is then the phenotype of interest.

The distribution of properties found in ncRNA in nature (from fRNAdb database) closely follows that obtained by G-sampling (uniform sampling over genotypes). Due to the bias in the GP map, this sampling is very different from P-sampling (uniform sampling over phenotypes). The strong bias makes certain structures appear much more often, which has been called convergent evolution in Evolution (part of the general phenomenon of homoplasy). An example is the ubiquity of the hammerhead ribozyme through all the kingdoms of life.

Figure 2. Comparison of P-sampled and G-sampled distributions to natural data for L = 20 RNA. The P-sampled P_P(Ω) (red diamonds) measures the probability distribution for a phenotype to have a given NS size Ω. It differs markedly from the G-sampled P_G(Ω) (blue circles), generated by random sampling over genotypes. Error bars arise from binning data. The black and cyan lines are theoretical approximations to P_P(Ω) and P_G(Ω), respectively (see Methods). The probability distribution of Ω for the SSs of all 7327 (non-trivial) L = 20 sequences for Drosophila melanogaster from the fRNAdb database [21] (green squares) is much closer to the G-sampled P_G(Ω) than to the P-sampled P_P(Ω). Inset: all 11 218 SS phenotypes (purple triangles) ranked by NS size Ω. There is strong bias: just 5% of phenotypes take up 58% of all genotypes. The 7327 natural data points (green squares) are clustered at lower rank (larger Ω). (Online version in colour.)

The number of 'relevant structures' can be estimated by the entropy H of the G-sampled distribution of features (for instance belonging to a certain binned interval of neutral space size, or number of stacks (sets of contiguous base-pairs)), as 2^H. One can define the bias ratio as the ratio of 2^H to the total number of phenotypes.
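A minimal sketch of this estimate: compute the Shannon entropy H (in bits) of a distribution over phenotypes and return 2^H as the effective number of relevant phenotypes (the distributions below are made up):

```python
import math

def effective_number(probs):
    # 2**H, with H the Shannon entropy of the distribution in bits
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    return 2 ** H

uniform = effective_number([0.25, 0.25, 0.25, 0.25])   # no bias: all 4 count
biased = effective_number([0.97, 0.01, 0.01, 0.01])    # strong bias: fewer than 2
```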

Within these relevant structures which arrive during evolution, natural selection still acts, and can be seen for example in the higher stability of natural RNAs vs random G-sampled RNAs. We find that the natural RNAs have slightly more bonds than G-sampled structures. The bias towards larger Ω also leads to structures with larger mutational robustness (see Robustness and Evolvability in Living Systems and From sequences to shapes and back: a case study in RNA secondary structures). Larger robustness is considered to be advantageous [6], so that, in this important way, phenotype bias facilitates evolution. The high robustness, however, is found in both G- and P-sampling because of the high genetic correlations (genes tend to be close in the mutational network to other genes that produce the same phenotype). The genetic correlations are high enough to produce giant connected components (see Natural Selection and the Concept of a Protein Space).

"Bias means that it will be difficult for evolution to find L = 55 structures with a large number of stacks, again raising the question of what kind of functionality is possible in principle that cannot be reached by evolution because of such phenotype bias constraints?"

Understanding tip: The line in figure 4 is flat when there are a lot of phenotypes because there are a lot of phenotypes with the same Ω, and the phenotypes are equally spaced in the x axis in a rank plot.

The results that G-sampling produce the same results as the database indicate that some property similar to ergodicity may be at play. G-sampling is an ensemble average, and the database shows a kind of time-average over evolutionary trajectories. However, the process cannot be totally ergodic because evolution is a nonequilibrium process, and effects like long waiting times and the Arrival of the frequent are examples of non-ergodic non-equilibrium effects.

The GP map bias is an example of how biases in development or other internal processes can strongly affect evolutionary outcomes. Such biases have been controversial; however, RNA SS provides perhaps the clearest and most unambiguous evidence for the importance of bias in shaping evolutionary outcomes.

See Homoplasy for discussion on the relation to convergent and parallel evolution. Our ability to make detailed predictions about evolutionary outcomes as well as counterfactuals for RNA may also shed light on Mayr’s famous distinction between proximate and ultimate causes in biology (See Cause and effect in biology and Proximate and ultimate causation). Not sure about this, or if I understand it..

The GP mapping constraint has some resemblance to classical morphogenetic constraints, which also bias the arrival of variation [47]. But it also differs, because the latter are conceptualized at the level of phenotypes and developmental processes, and may have been shaped by prior selection, whereas the former constraint is a fundamental property of the mapping from genotypes to phenotypes and was not selected for (except perhaps at the origin of life itself). Still, maybe most possible GP maps have this property anyway (see experiments with transducers).

Finally, strong phenotype bias is also found in:

suggesting that some of the results discussed in this paper for RNA may hold more widely in biology

See also Evolving automata

Paper with several examples of GP maps, including cellular automata map: An investigation of redundant genotype-phenotype mappings and their role in evolutionary search

It would be interesting to devise artificial methods to search for such undiscovered ribozymes (those that are very improbable for evolution to find), some of which may be more fit than those that Nature has found.

For this see:

Exploring the repertoire of RNA secondary motifs using graph theory; implications for RNA design. tree graphs to describe RNA tree motifs and more general (dual) graphs to describe both RNA tree and pseudoknot motifs. our graph theory approach to RNA structures has implications for RNA genomics, structure analysis and design.

Experimental fitness landscapes to understand the molecular evolution of RNA-based life In evolutionary biology, the relationship between genotype and Darwinian fitness is known as a fitness landscape. These landscapes underlie natural selection, so understanding them would greatly improve quantitative prediction of evolutionary outcomes, guiding the development of synthetic living systems. However, the structure of fitness landscapes is essentially unknown. Our ability to experimentally probe these landscapes is physically limited by the number of different sequences that can be identified. This number has increased dramatically in the last several years, leading to qualitatively new investigations. Several approaches to illuminate fitness landscapes are possible, ranging from tight focus on a single peak to random speckling or even comprehensive coverage of an entire landscape. We discuss recent experimental studies of fitness landscapes, with a special focus on functional RNA, an important system for both synthetic cells and the origin of life.


Methods

the_imortalist.jpg

guillefix 21st January 2016 at 7:57pm

Theology

guillefix 8th July 2016 at 1:50am

Theoretical computer science

guillefix 21st June 2016 at 3:31pm

Computer science is what came out of asking: what kind of maths can actually be effectively carried out in the physical world? Theoretical computer science, looks at the more theoretical (as opposed to applied) aspects of this question.

The nature of computation by Moore and Mertens (looks like a nice book). Good reads page and Amazon page

Oxford course 1st year

Computational learning theory

See Discrete mathematics


Structure and interpretation of computer programs Companion site

Functional programming

Oxford course - func prog

http://learnyouahaskell.com/introduction

https://www.haskell.org/

Higher-order functions. Composition.

Examples in JS: .filter, map, reduce


Theory of computation

Church-Turing thesis

https://www.youtube.com/watch?v=2jz0ugqghys

http://research.cs.queensu.ca/home/akl/cisc879/papers/PAPERS_FROM_MINDS_AND_MACHINES/VOLUME_13_NO_1/V23L84X656370574.pdf This gets quite philosophical of course

https://www.youtube.com/watch?v=92WHN-pAFCs


People, Problems, and Proofs

Emergence, Complexity and Computation

Philosophy of computer science

Theory of computation

guillefix 24th June 2016 at 3:04am

Computation is the part of maths that can effectively be carried out in the world

Computation is often studied via mechanistic models like those formalized in Automata theory. The main models will be explained below, in the Models of Computation section.

Language:

A formal language is a set of strings of symbols that may be constrained by rules that are specific to it. These rules can also be expressed as machines, like finite state machines or Turing machines.

Finite state machines < Context-free languages < Turing machines < Undecidable problems (hypercomputation)
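At the lowest level of this hierarchy, a finite state machine can be sketched directly as a transition table; this illustrative one accepts binary strings with an even number of 1s:

```python
def accepts_even_ones(s):
    # 2-state DFA; 'even' is both the start state and the only accepting state
    delta = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd', ('odd', '1'): 'even'}
    state = 'even'
    for ch in s:
        state = delta[(state, ch)]
    return state == 'even'
```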

See Chomsky hierarchy in Formal systems and semantics

https://www.youtube.com/watch?v=ZNBNmxXKmUY&index=7&list=PL601FC994BDD963E4. On Lect 3 part 2/10

Computability theory

Computability of functions

Models of computation

See also Automata theory for more.

Finite-state machine


Theory of Computation - Fall 2011 (Course)

Theory of Computation

Theory of Automata, Formal Languages and Computation lect 1

Computability and recursion

AIT lectures

Introduction to computability theory

Unconventional computing


Causal Nets or What Is a Deterministic Computation?

http://www.cs.bu.edu/~gacs/recent-publ.html

Theory of phoretic mechanisms of self-propelled colloids

guillefix 17th June 2016 at 12:54am

See Clusters, asters, and collective oscillations in chemotactic colloids for more details. See also Phoretic mechanisms of self-propelled colloids, Collective behaviour of active colloids, Diffusiophoresis, and Designing phoretic micro- and nano-swimmers.

Use normal flux boundary conditions for the Diffusion of the concentration of product (pp) and substrate (ss), as done in Concentration around a self-diffusiophoretic particle.

Michaelis-Menten reaction rate (see Enzyme kinetics).

Number conservation for the products and substrates, and the assumption that s and p diffuse rapidly compared to the colloid (so that time dependencies and advection by flow [41] can be ignored), give:

D_p p + D_s s = D_s s_b

where s_b is the background substrate profile. We thus need to solve for just one of the two concentration fields. This equation comes from the condition that, after reaching the stationary state (assumed fast, by molecules diffusing fast), the flux of products out should equal the net flux of substrate in, i.e. D_p \partial_r p = -D_s \partial_r s = \alpha (where \alpha is the common flux magnitude, see here and here). Now integrate w.r.t. r over the boundary layer (assumed to be very thin, of size \delta \ll a, a the radius of the colloid) to get D_p (p(a+\delta) - p(a)) = -D_s (s(a+\delta)-s(a)). Now the concentration of p outside the boundary layer is assumed to be very small, while that of s is fixed to s_b. We thus recover the above equation. Because the boundary layer is very thin, s and p change approximately linearly within it, and the above equation can be interpreted as simply a "discretization" of the equation with derivatives, which actually holds just at the surface. Note that the solution of the diffusion equation at stationarity in 1D is linear, which helps justify this under the thin boundary approximation.
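As a sanity check, the flux-matching argument can be verified numerically on linear boundary-layer profiles. All parameter values below (diffusivities, layer thickness, flux) are illustrative, not taken from the paper:

```python
# Numerical check of D_p p + D_s s = D_s s_b across a thin boundary layer,
# assuming linear profiles (stationary 1D diffusion) and flux matching
# D_p dp/dr = -D_s ds/dr = alpha at the colloid surface r = a.
D_p, D_s = 2.0, 1.0      # product / substrate diffusion constants (illustrative)
s_b = 5.0                # background substrate concentration
delta = 0.1              # boundary-layer thickness (delta << a)
alpha = -3.0             # common flux; sign chosen so products accumulate at the surface

dp_dr = alpha / D_p
ds_dr = -alpha / D_s
# Outside the layer: p ~ 0, s ~ s_b; extrapolate linearly back to r = a.
p_a = 0.0 - delta * dp_dr
s_a = s_b - delta * ds_dr

conserved = D_p * p_a + D_s * s_a
print(conserved, D_s * s_b)  # both 5.0: the combination is conserved
```

The check holds for either sign of alpha, since the two profiles change in fixed proportion.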

We work first in the linear regime, which refers to the limit s_b \ll \kappa_1/\kappa_2 = \kappa_M. Here, \kappa_M is the Michaelis constant, and this regime corresponds to the case where the rate of catalysis is linearly proportional to the substrate concentration (see Enzyme kinetics). This regime is also called unsaturated. Later we look at the saturated regime. See Collective behaviour of active colloids

The resulting slip velocity (see Diffusiophoresis) of the fluid at the surface of the colloid (due to the interaction of the surface with both substrate and products) leads, for spherical colloids, to angular (\mathbf{\omega}) and linear (\mathbf{v}) velocities:

\mathbf{\omega} = -\frac{3}{16\pi R} \int \hat{\mathbf{r}} \times \mathbf{v}_{\text{slip}}(\mathbf{r}) d \Omega

\mathbf{v} = -\frac{1}{4\pi } \int \mathbf{v}_{\text{slip}}(\mathbf{r}) d \Omega

Again see Diffusiophoresis; these are derived from the reciprocal theorem. They can be expressed in terms of coefficients related to the spherical harmonic coefficients (we only include the first few) of the surface activity \sigma(\theta, \phi), and motilities \mu_p(\theta, \phi) and \mu_s(\theta, \phi) (see Diffusiophoresis):

\mathbf{\omega} = \Phi_0 (\sigma, \mu_p, \mu_s) \hat{\mathbf{n}} \times \nabla s

\mathbf{v} = V_0 (s) \hat{\mathbf{n}} -\alpha_0 \nabla s -\alpha_1 \hat{\mathbf{n}} \hat{\mathbf{n}} \cdot \nabla s

The coefficients \Phi_0, \alpha_0, etc. take into account the external substrate gradient directly, as well as the effects that the external substrate gradient has on the gradient of products produced by the particle.

Essentially, the different Phoretic mechanisms of self-propelled colloids correspond to responses in either \mathbf{\omega} or \mathbf{v} to the external gradient, through different spherical harmonic components.

... if either \sigma or \mu_p contain all odd or all even harmonics there is no reorientation in response to the gradient (\omega = 0).

From calculations we find explicit examples of the general design tip: slip velocity is maximum when the position where \mu_p is maximum coincides with the region where p changes most rapidly. To see more about design considerations see Designing phoretic micro- and nano-swimmers.

Thermodynamic equilibrium

guillefix 26th May 2016 at 11:45pm

https://en.wikipedia.org/wiki/Thermodynamic_equilibrium Thermodynamic equilibrium, no net currents (detailed balance)

Linear response theory, deals with near equilibrium systems, where averaged quantities either don't change, or change very slowly, I think. Currents may be non-zero in either case. Kubo formula. Read more here.

Thermodynamic potential

guillefix 2nd July 2016 at 3:23pm

Thermodynamics

guillefix 23rd May 2016 at 11:21pm

Thermodynamics...

See Statistical physics for now

Thermodynamics of liquid-liquid unmixing

guillefix 31st March 2016 at 7:27pm

When two liquids are miscible in all proportions at high temperature, but separate into two distinct phases when the temperature is lowered.

The Mean field theory for this situation is the regular solution model. This describes the thermodynamics (i.e. equilibrium properties) of the phase separation. The kinetics (i.e. non-equilibrium properties/dynamics) of phase separation are described here.

The important quantity is the volume fraction, \phi_{A,B}, proportional to the probability to find a particle of type A or B at a given point, which may in principle depend on space.

To begin with, we assume it doesn't depend on space, and we assume that the probabilities for neighbours are independent (mean field approximation).

A way to think about this more precisely is to imagine each of the configurations for unlabelled particles (with finite volume) in a fluid. Assume all of these are equally probable, with probability 1/\Omega. Now, for each of these spatial configurations of unlabelled particles, imagine all the possible ways of labelling the particles with A or B. In particular, we assume that for each of these configurations, the labelling of each particle is an independent random event, with probability \phi_A of labelling it A, and probability \phi_B of labelling it B. This doesn't fix the total numbers of A and B, but for large numbers it approximately does so, with relative errors of order 1/\sqrt{N}. Within this approximation we also have \phi_i=N_i/N (where N_i is the average number of species i), so that we may call \phi a concentration.

We could do it fixing the number of particles of each species, but it's more cumbersome, and not really correct for the case where the \phis vary in space (because when \phi varies in space, we don't assume the numbers are fixed, but only the chemical potentials, and thus the average numbers). If one fixes the number of each species, though, one can approach it as is done in the derivation of the Flory-Huggins theory in Doi's polymer physics book (see also notes on an extension to the continuous Gaussian chain, instead of the lattice model).

More importantly, these probabilities are not right because nearby particles are going to interact in our model, so there will be correlations in positions induced by the Boltzmann factors depending on the energies. This is where we make the mean field approximation. We ignore these correlations and assume the probability distributions at each site are independent!

By decomposing the possible states in this way we have for the entropy (A is the set of unlabelled arrangements):

S=-\sum p\ln{p} = -\sum_{A} \sum_{N_A, N_B} \frac{N!}{N_A!N_B!} \left(\frac{1}{\Omega} \phi_B^{N_B} \phi_A^{N_A}\right) \ln{\left(\frac{1}{\Omega}\phi_B^{N_B} \phi_A^{N_A}\right)}

=\ln{\Omega} -\sum_{A} \sum_{N_A, N_B}\frac{N!}{N_A!N_B!} \left(\frac{1}{\Omega} \phi_B^{N_B} \phi_A^{N_A}\right) \ln{(\phi_B^{N_B} \phi_A^{N_A})}

=\ln{\Omega} -\sum_{N_A, N_B}\frac{N!}{N_A!N_B!} (\phi_B^{N_B} \phi_A^{N_A}) (N_B\ln{\phi_B }+N_A\ln{\phi_A})

=\ln{\Omega} -\sum_{N_A} \frac{N!}{N_A!(N-N_A)!}(\phi_B^{N-N_A} \phi_A^{N_A}) ((N-N_A)\ln{\phi_B }+N_A\ln{\phi_A})

=\ln{\Omega} -N\phi_B \ln{\phi_B}-N\phi_A \ln{\phi_A}

where we used \ln(x/\Omega)=\ln{x}-\ln{\Omega} together with normalization of the probabilities (first step), the properties of Binomial distributions, and that \phi_B+\phi_A=1, as there are no other types of particles. Ignoring constants, the entropy per particle is:

-\phi_B \ln{\phi_B}-\phi_A \ln{\phi_A}
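The binomial-average step that produces the per-particle entropy can be checked numerically; N and \phi_A below are arbitrary illustrative values:

```python
from math import comb, log

# Check: the binomial average of N_A ln(phi_A) + N_B ln(phi_B)
# equals N (phi_A ln phi_A + phi_B ln phi_B).
N, phi_A = 50, 0.3
phi_B = 1 - phi_A

lhs = sum(comb(N, nA) * phi_A**nA * phi_B**(N - nA)
          * (nA * log(phi_A) + (N - nA) * log(phi_B))
          for nA in range(N + 1))
rhs = N * (phi_A * log(phi_A) + phi_B * log(phi_B))
print(abs(lhs - rhs) < 1e-9)  # True
```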

We can write the energy per particle too. We define energies for AA, BB, and AB pairs. We assume, following our mean field approximation, that the number of A neighbours equals the expected number given by the above scheme, i.e. z\phi_A, and similarly for B, where z is the expected number of neighbours, not caring about label. After some algebra this gives a free energy:

\frac{F_{\text{mix}}}{kT}=\phi_A\ln{\phi_A} +\phi_B\ln{\phi_B}+\chi\phi_A\phi_B.

where \chi depends on the strength of the interaction energies relative to kT. This curve (considering F as a function of \phi_A, say) has one minimum for \chi < 2 and two minima for \chi > 2. When there's one minimum, the system will in general not reach it because \phi_A is fixed, and it can be seen geometrically (see soft matter Jones book) that when the curve has positive curvature, any phase separation will be unfavourable. However, when the two minima appear, it is favourable.
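A quick numerical scan of the free-energy curve confirms the change in the number of minima; the \chi values below are illustrative:

```python
import numpy as np

# Regular-solution free energy per particle, F/kT = phi ln phi
# + (1-phi) ln(1-phi) + chi phi (1-phi); count its interior local minima.
def F(phi, chi):
    return phi * np.log(phi) + (1 - phi) * np.log(1 - phi) + chi * phi * (1 - phi)

def n_minima(chi, n=2001):
    phi = np.linspace(1e-4, 1 - 1e-4, n)
    f = F(phi, chi)
    interior = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])
    return int(interior.sum())

print(n_minima(1.0), n_minima(3.0))  # 1 minimum below chi = 2, 2 minima above
```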

Phase separation refers to a system where there are different spatial regions in the volume of the system with different values for the order parameter, in this case related to \phi_A.

Fig. 1

The curve corresponding to the most favourable concentrations that will coexist in the different regions for the phase separated mixture is called the coexistence curve, or the binodal.

These most favourable concentrations are the ones for which, when a straight line is drawn through their corresponding values of F on the curve, the intersection of that line with the vertical line \phi=\phi_0 (the initial concentration) is lowest. See Fig 1.a. This (if there are no degeneracies) can be found by the double-tangent construction: by finding a straight line that is tangent to the curve at two points. This condition is derived as follows:

Analyzing the free energy curve, and realizing that the separation process is continuous (not a sudden jump), one realizes that depending on where the initial concentration lies, the separation is locally stable or locally unstable (i.e. metastable). This depends on the curvature of the curve, as seen in figure 2. As usual, the metastable case will have a time-scale for overcoming the barrier (exponentially dependent on the barrier height, c.f. Kramers rate theory).

Fig. 2

The curve that separates these two regimes, i.e. where d^2F/d\phi^2=0, is called the spinodal.
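For the free energy above, d^2F/d\phi^2 = 1/\phi + 1/(1-\phi) - 2\chi, so the spinodal is \chi_s(\phi) = 1/(2\phi(1-\phi)); minimizing it over \phi recovers the critical point \chi = 2 at \phi = 1/2, consistent with the two minima appearing for \chi > 2:

```python
import numpy as np

# Spinodal from d^2F/dphi^2 = 1/phi + 1/(1-phi) - 2*chi = 0,
# i.e. chi_s(phi) = 1 / (2 phi (1 - phi)); its minimum is the critical point.
phi = np.linspace(0.01, 0.99, 9801)
chi_s = 1.0 / (2 * phi * (1 - phi))
i = np.argmin(chi_s)
print(round(float(phi[i]), 2), round(float(chi_s[i]), 3))  # 0.5 2.0
```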

A good point to remember is that \chi, in the simplest case, depends on temperature as 1/T, but often the energies of interaction we used in it have entropic contributions, so the temperature dependence is more complicated.

Thermophoresis

guillefix 9th June 2016 at 4:47pm

this is a test now

guillefix 17th January 2016 at 3:12pm

this is a test tiddler

23rd July 2016 at 10:47pm

Thunder

guillefix 5th May 2016 at 10:37pm

tilting_ratchet1.png

guillefix 21st January 2016 at 5:37pm

tilting_ratchet2.png

guillefix 21st January 2016 at 5:42pm

tilting_ratchet3.png

guillefix 21st January 2016 at 6:02pm

tilting_ratchet4.jpg

guillefix 21st January 2016 at 6:07pm

tilting_ratchet5.jpg

guillefix 21st January 2016 at 6:08pm

tilting_ratchet6.png

guillefix 21st January 2016 at 6:12pm

tilting_ratchet7.jpg

guillefix 21st January 2016 at 6:12pm

Tool

guillefix 1st July 2016 at 11:21pm

top lel

guillefix 9th February 2016 at 12:39pm

Lel wut?

Topography

guillefix 11th June 2016 at 3:20pm

Topography is the study of the shape and features of the surface of the Earth and other observable astronomical objects including planets, moons, and asteroids.

See also Application of percolation models in topography

Topological dynamical system

guillefix 7th July 2016 at 7:59pm

A Topological dynamical system consists of a Topological space (e.g., a Metric space) M, and a continuous map f: M \rightarrow M.

Topological dynamics

guillefix 5th July 2016 at 9:28pm

Topological entropy

guillefix 12th July 2016 at 12:45am

Topological entropy

Topological entropy video

In dynamical systems, complexity is usually measured by the topological entropy and reflects, roughly speaking, the proliferation of periodic orbits with ever longer periods, or the number of orbits that can be distinguished with increasing precision.

See the related Kolmogorov-Sinai entropy

Hans Henrik RUGH - The Milnor-Thurston determinant and the Ruelle transfer operator

Descriptional complexity

For a coarse-grained Dynamical system, described by a transition graph, in turn described by an Adjacency matrix A, the topological entropy h is

h = \log{\lambda_{\text{max}}}

where \lambda_{\text{max}} is the maximum eigenvalue of A (assumed to be a positive matrix so that Perron-Frobenius applies).
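A minimal sketch of this formula, using the golden-mean shift (binary sequences with no two consecutive 1s) as the example; its adjacency matrix is not strictly positive, but it is primitive, so Perron-Frobenius still applies:

```python
import numpy as np

# Topological entropy h = log(lambda_max) of a transition graph.
# Golden-mean shift: state 0 can go to 0 or 1, state 1 only back to 0.
A = np.array([[1, 1],
              [1, 0]])
lam_max = max(abs(np.linalg.eigvals(A)))
h = np.log(lam_max)
print(h, np.log((1 + np.sqrt(5)) / 2))  # both ~0.4812: log of the golden ratio
```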

Determinant of a graph

Topological quantum field theory

guillefix 23rd June 2016 at 3:08pm

Topological space

guillefix 14th July 2016 at 3:52am

A topological space is a Set X, with a collection of distinguished Subsets called Open sets, called the topology of the set. These must satisfy:

  • the union of an arbitrary collection of open sets is open;
  • the intersection of any finite collection of open sets is open;
  • the empty set \emptyset and the whole set X itself are both open.
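For a finite set, the three axioms can be checked by brute force; the example topology on X = {1, 2, 3} below is illustrative:

```python
from itertools import combinations, chain

# Brute-force check of the topology axioms on a finite candidate family T.
def is_topology(X, T):
    T = {frozenset(U) for U in T}
    X = frozenset(X)
    if frozenset() not in T or X not in T:
        return False
    # closure under all unions (finite X, so all unions are finite)
    for k in range(2, len(T) + 1):
        for Us in combinations(T, k):
            if frozenset(chain.from_iterable(Us)) not in T:
                return False
    # closure under pairwise (hence all finite) intersections
    for U, V in combinations(T, 2):
        if U & V not in T:
            return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))  # True
print(is_topology(X, [set(), {1}, {2}, X]))     # False: {1} ∪ {2} is missing
```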

An equivalent definition is that a topological space is a Neighbourhood space (X, \mathcal{N}) in which, for all x \in X and for all N \in \mathcal{N}(x), there exists N_1 \in \mathcal{N}(x) such that, for all y \in N_1, N \in \mathcal{N}(y).

It can also be shown that: A neighbourhood space (X, \mathcal{N}) is a topological space if and only if each Filter \mathcal{N}(x) has a Filter base consisting of Open sets.

Remark: for a family C of subsets of a set X, there exists a unique 'smallest' topology on X for which C is a subbase: namely, that topology whose open sets are defined to be all arbitrary unions of the collection of all finite intersections of elements of C.

Connections with lattices

The set of open sets in a topology forms a lattice, where the partial ordering is set inclusion. The set of topologies on a set X can also be equipped with a natural lattice structure.

Analytical properties

In a topological space one can define fundamental notions of analysis, such as convergence and continuity. These are approached using neighbourhoods of a point, which are just open sets that contain that point.

Examples of topologies

Product topology

Discrete topology

Related spaces

Metric space

Compact space

Hausdorff space

Topological trace formula

guillefix 6th July 2016 at 12:03am

The topological trace formula is a Trace formula for Topological dynamics.

\sum_{\alpha} \frac{z \lambda_\alpha}{1-z\lambda_\alpha}=\sum_p \frac{n_p t_p}{1-t_p}

See here and here. Also here. Here t_p = z^{n_p} if the prime cycle p exists (n_p being its length), and t_p = 0 otherwise.

http://www.chaosbook.org/course1/Course2w9.html

This formula has uses for deriving a formula for the Topological entropy

Topological zeta function

guillefix 6th July 2016 at 12:04am

Topology

guillefix 29th March 2016 at 4:28pm

Things often have a shape (continued from Geometry). Topology looks at the "raw" shape, the shape that is invariant under continuous transformations. The standard notion of "shape" is really Geometry (which includes topology)

Topos theory

guillefix 24th June 2016 at 1:33am

A topos is a category that behaves like the category of sheaves of sets on a topological space (or more generally: on a site). Topoi behave much like the category of sets and possess a notion of localization; they are in a sense a generalization of point-set topology. The Grothendieck topoi find applications in Algebraic geometry; the more general elementary topoi are used in logic.

A topos is a category with:

A) finite limits and colimits,

B) exponentials,

C) a subobject classifier.

Topos Theory in a Nutshell

Higher topos theory

Torch (Deep learning framework)

guillefix 3rd April 2016 at 3:45pm

Total ordering

guillefix 14th July 2016 at 1:07am

A total ordering is a binary Relation on a set X, defined as a Partial ordering, \preceq, such that for any x, y \in X either x \preceq y or y \preceq x. The set is then said to be totally ordered.

Trace formula

guillefix 5th July 2016 at 10:47pm

A trace formula relates the spectrum of eigenvalues of an operator - for instance, the transition matrix - to the spectrum of periodic orbits of a dynamical system.

See here. and Topological trace formula

Transhumanism

guillefix 11th June 2016 at 5:19pm

Transition graph

guillefix 5th July 2016 at 5:42pm

Transitivity

guillefix 14th July 2016 at 12:58am

Transitivity refers to a property of a binary Relation, R on X:

for all x, y, z \in X, if x R y and y R z, then x R z
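This condition is easy to check by brute force for a finite relation given as a set of ordered pairs; the example relations are illustrative:

```python
# Brute-force transitivity check: for every chained pair (x, y), (y, z) in R,
# the pair (x, z) must also be in R.
def is_transitive(R):
    return all((x, z) in R
               for x, y in R for y2, z in R if y == y2)

print(is_transitive({(1, 2), (2, 3), (1, 3)}))  # True
print(is_transitive({(1, 2), (2, 3)}))          # False: (1, 3) is missing
```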

Transitivity (Graph theory)

guillefix 30th January 2016 at 2:09pm

Transitivity (a property of mathematical relations) in a network is usually applied to the relation "is connected by an edge". So a network is transitive if for every u connected to v and v connected to w, u is also connected to w. It's not hard to show that a perfectly transitive network can only have components that are fully connected, or cliques.

To be useful for real networks, we talk about partial transitivity, or the level of transitivity in a network. A way to quantify this is by measuring the number of paths of length-2 that are closed (closed here meaning that there is an edge that connects the beginning and ending vertices) compared to the total number of length-2 paths. This is because three vertices in a path of length-2 (a.k.a connected triple) would form a triangle (also known as closed triad) if transitivity holds for them.

One can then define the clustering coefficient, CC, to be the ratio of these two quantities, as a measure of "how often" transitivity holds in the network:

C=\frac{\text{number of closed paths of length 2}}{\text{number of paths of length 2}}=\frac{\text{(number of triangles)} \times 6}{\text{number of paths of length 2}}=\frac{\text{(number of triangles)}\times 3}{\text{number of connected triples}}

where the 6 and the 3 come from counting the number of length-2 paths starting at the three different vertices of the triangle, where we count the two different directions (6) or not (3). This factor is cancelled by the fact that by definition there are twice as many length-2 paths as connected triples because connected triples don't take direction into account, while length-2 paths do. This last definition is the most common, and can be interpreted as the number of people with a common friend (connected triple) that are also friends (so that they form a triangle).

Another way to define a clustering coefficient would be to average the local clustering coefficient over all nodes. This quantity is defined, for node i, as:

C_i=\frac{\text{pairs of neighbours of i that are adjacent to each other}}{\text{number of pairs of neighbours of i}}=\frac{2\tau_i}{k_i(k_i-1)}

which is defined when the degree k_i\geq 2. For smaller degree, we can define C_i=0. The average of this over nodes in the network, C_{WS}, then also defines a global measure of transitivity, and was proposed by Watts and Strogatz. It often tends to be dominated by nodes with low degree, as the denominator of C_i is small for them.
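Both definitions can be computed directly from an adjacency list; the small graph below (a triangle 0-1-2 with a pendant node 3 attached to 0) is an illustrative example where C and C_WS differ:

```python
from itertools import combinations

# Global clustering coefficient C and Watts-Strogatz average C_WS
# for a triangle with one pendant node.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}

triangles = sum(1 for u, v, w in combinations(adj, 3)
                if v in adj[u] and w in adj[u] and w in adj[v])
triples = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in adj)
C = 3 * triangles / triples

def C_local(v):
    k = len(adj[v])
    if k < 2:
        return 0.0
    closed = sum(1 for u, w in combinations(adj[v], 2) if w in adj[u])
    return 2 * closed / (k * (k - 1))

C_WS = sum(C_local(v) for v in adj) / len(adj)
print(C, C_WS)  # 0.6 and ~0.583: the two definitions differ in general
```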

Furthermore, one can extend the definition of the clustering coefficient beyond simple transitivity, to include the probability that friends of friends of friends are also your friends, and so on. This is equivalent to considering quadrilaterals, pentagons, and other more general motifs, apart from triangles. Triangles are often interesting because they are the smallest loops for undirected simple graphs. However, for directed simple graphs, the smallest ones are length-2 loops, and their frequency gives a measure called reciprocity.

For social networks, typical values are C=0.10.5C=0.1-0.5, which is quite high compared to most non-social networks.

Local clustering coefficients can be used to find structural holes: that is, places in the network where we would expect a link to exist, due to transitivity, but there isn't one. Structural holes are bad for information flow (or other flows) in a network because they limit the paths it can take. However, they are usually good for the node that has low local clustering coefficient, because it means that that node has more control over the flow, as most of its neighbours will have to direct their flow through it. Thus the local clustering coefficient is sometimes used as a centrality measure in this sense, where a more central node has a lower C_i.

Another way to find structural holes is via the redundancy of a node, R_i, defined as the mean (that is, average over neighbours of i) number of [[neighbours of i] that a neighbour of i is connected to]. This can be shown to be related to C_i by:

C_i=\frac{R_i}{k_i-1}.

Transport

guillefix 7th May 2016 at 1:30am

Transportation technology & engineering

See Transportation innovation

Transport innovation

guillefix 24th June 2016 at 2:02am

Autonomous ground vehicles

Self-driving cars, google, tesla

Smart vehicles - IoT

Hyperloop

http://www.techinsider.io/images-of-the-hyperloop-technologys-test-track-2016-3

Electric cars

Tesla Motors

Faraday Futures

NextEV

BYD

Electric plane

https://en.wikipedia.org/wiki/Electric_aircraft

E-thrust concept from Airbus

Distribution and supply-chain

http://www.tandfonline.com/doi/abs/10.1080/00207540500142274

Unmanned aerial vehicles

Drones

Ionocraft

https://www.sciencedaily.com/releases/2013/04/130403122013.htm

On the performance of electrohydrodynamic propulsion citing papers

Electrohydrodynamic thrust density using positive corona-induced ionic winds for in-atmosphere propulsion We conclude that EHD propulsion has the potential to be viable from both an energy efficiency perspective (our previous study) and a thrust density perspective (this paper), with the greatest likelihood of viability for smaller aircraft such as unmanned aerial vehicles.

On the Thrust of a Single Electrode Electrohydrodynamic Thruster :O

Performance characterization of electrohydrodynamic propulsion devices

Noone talks about power storage and supply problem?

“The voltages could get enormous,” Barrett says. “But I think that’s a challenge that’s probably solvable.” For example, he says power might be supplied by lightweight solar panels or fuel cells. Barrett says ionic thrusters might also prove useful in quieter cooling systems for laptops.

http://www.scielo.br/scielo.php?pid=S1806-11172015000300307&script=sci_arttext

A Review of Future Propulsion Technologies

Passenger drone

World's first passenger drone cleared for testing in Nevada


Esoteric ideas

Lightcraft..

Tree (combinatorial structure)

guillefix 28th June 2016 at 5:20pm

A tree is a combinatorial structure recursively defined to be {a node and a sequence of trees}.

See Symbolic method for unlabelled structures

See also the particular kind: Tree (Graph theory)

Types

  • Rooted/Unrooted, depending on whether a node is distinguished as the "root".
  • n-ary (for instance binary), if each node has n children.
  • labelled/unlabelled
  • ordered/unordered. An ordered rooted tree is called planted.

Graph-Theoretic Concepts in Computer Science: 29th International ..., Volume 29

Tree (Graph theory)

guillefix 28th June 2016 at 5:14pm

A tree, in graph theory, is a connected, undirected graph that contains no closed loops. A forest is a disconnected graph whose connected parts are trees.

A tree in graph theory is a particular kind of Tree (combinatorial structure).

Trees are often drawn in a "rooted" manner. However, topologically, no node is distinguished as a root, and we could choose any node to be the root in this representation.

Properties

  • There is exactly one path between every two points. If there were two, one could use them to make a loop.
  • A tree of n vertices always has exactly n-1 edges (seen by constructing the tree in rooted form).
  • A connected network with n vertices and the minimum number of edges is always a tree (see Newman pages 128-129 for proofs).
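The n-1 edge count can be checked with a union-find cycle test; the example tree is illustrative:

```python
# Verify that a connected, acyclic graph on n vertices has exactly n-1 edges,
# using union-find to detect loops and disconnection.
def count_and_check(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        assert ru != rv, "edge (%d,%d) closes a loop" % (u, v)
        parent[ru] = rv
    # connected iff a single root remains
    assert len({find(x) for x in range(n)}) == 1, "graph is disconnected"
    return len(edges)

tree = [(0, 1), (0, 2), (2, 3), (2, 4)]
print(count_and_check(5, tree))  # 4 == n - 1
```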

Applications

Computer Science

  • Data structures
    • AVL trees
    • Heaps
  • Minimum spanning trees
  • Cayley trees
  • Bethe lattices
  • Hierarchical models of networks

Network theory

  • small components in the network of a Random Graph are trees
  • Dendrograms, a hierarchical decomposition of a network as a tree.

Physics

  • Feynman diagrams

Tree of life

guillefix 8th July 2016 at 6:38pm

Trellis diagram

guillefix 3rd July 2016 at 5:09am

Triangle inequality

guillefix 14th July 2016 at 12:41am

An inequality satisfied by a Function of two variables:

d(x,y) \leq d(x,z) + d(z,y)

It is a necessary condition for a Metric.

Trigonometry

guillefix 1st June 2016 at 7:09pm

Tuple

guillefix 7th July 2016 at 6:48pm

An ordered collection of objects.

Tuples are found, for instance, as elements of a Cartesian product

Turmite

guillefix 13th July 2016 at 9:21pm

In computer science, a turmite is a Turing machine which has an orientation as well as a current state and a "tape" that consists of an infinite two-dimensional grid of cells.
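A minimal sketch of a turmite is Langton's ant: turn one way on a white cell, the other on a black cell, flip the cell's colour, and step forward. The step counts below are arbitrary:

```python
# Langton's ant on an unbounded grid, storing only the black cells.
def langtons_ant(steps):
    black = set()
    x, y, dx, dy = 0, 0, 0, -1   # position and heading
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx     # turn one way on black
            black.remove((x, y))
        else:
            dx, dy = dy, -dx     # turn the other way on white
            black.add((x, y))
        x, y = x + dx, y + dy
    return black

print(len(langtons_ant(1000)))  # number of black cells after 1000 steps
```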


https://en.wikipedia.org/wiki/Turmite

Types of contagions

guillefix 2nd June 2016 at 1:53am

For Simple contagions, a node can get infected by simple exposure to another infected node (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses)

For Complex contagions, nodes get infected by more complex processes, often involving several other nodes. These are often used to model more complicated social contagions and phenomena. See Social dynamics

See also wiki page: Complex contagion

Types of percolation models

guillefix 16th June 2016 at 12:26am

Types of models used in the study of Percolation, and Percolation theory

Site percolation

Remove nodes (each with a given probability, or a fixed fraction; these are the same in the limit of infinite N). Can have:

  • "Random attack". Remove random nodes.
  • "Targeted attack". Remove nodes preferentially by degree, or by other metric. For an application of targeted attacks to the problem of influence maximization see Influence maximization in complex networks.
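A minimal site-percolation sketch on a square lattice, measuring the largest cluster with union-find; the lattice size, occupation probabilities, and random seed are illustrative:

```python
import random

# Site percolation on an L x L square lattice: occupy each site with
# probability p, then find the largest cluster fraction via union-find.
def largest_cluster_fraction(L, p, seed=0):
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = {(i, j): (i, j) for i in range(L) for j in range(L) if occ[i][j]}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            for ni, nj in ((i + 1, j), (i, j + 1)):   # right/down neighbours
                if ni < L and nj < L and occ[ni][nj]:
                    ra, rb = find((i, j)), find((ni, nj))
                    if ra != rb:
                        parent[ra] = rb
    sizes = {}
    for site in parent:
        r = find(site)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / (L * L)

# Above the 2D site-percolation threshold (~0.593) a giant cluster appears.
print(largest_cluster_fraction(50, 0.3), largest_cluster_fraction(50, 0.8))
```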

Bond percolation

Remove edges

K-core percolation

Pruning process for obtaining the K-core of a network: one removes all nodes with fewer than K neighbours, and repeats this process.

Explosive percolation

Percolation processes that show a discontinuous, or at least very steep phase transition.

  • Achlioptas process
  • Half-restricted process
  • Spanning cluster-avoiding (SCA) process

Bootstrap percolation

http://research.microsoft.com/en-us/um/people/holroyd/boot/

An "infection" process in which nodes become infected if sufficiently many of their neighbors are infected. Related to the Centola-Macy threshold model for social contagions.

Limited path percolation

One construes "connectivity" as implying that a sufficiently short path still exists after some network components have been removed. To appreciate this idea, imagine trying to navigate a city in which some streets are blocked.

K-clique percolation

Percolation of K-cliques (completely connected subgraphs of K nodes) has been used to study the algorithmic detection of dense sets of nodes known as "communities" (see Uncovering the overlapping community structure of complex networks in nature and society pdf).

Percolation in Multilayer networks

Non-self-averaging percolation process

A type of process that is non-self-averaging, in the sense that the relative variance of the size of the largest component doesn't vanish in the thermodynamic limit.

  • Fractional percolation

Correlated percolation

Directed percolation

Percolation on a directed Network.

Other

types_of_random_processes.png

guillefix 21st January 2016 at 2:23am

Ultrafiltration

guillefix 2nd July 2016 at 3:54am

Unconventional computing

guillefix 29th June 2016 at 6:31pm

Unit testing

guillefix 4th February 2016 at 1:20am

https://qunitjs.com/intro/

Test-driven development

Universal algebra

guillefix 29th March 2016 at 3:14pm

Universal Chart

guillefix 5th July 2016 at 3:23am

The map of the whole Universe

Unsupervised learning

guillefix 12th July 2016 at 12:35am

Upper set

guillefix 14th July 2016 at 2:14am

In any lattice, L, a subset U of L is said to be an upper set if a \in U implies that b \in U for all b \in L satisfying a \preceq b, where \preceq refers to the Partial ordering defining the lattice.

Useful TW resources

guillefix 18th January 2016 at 2:59pm

Variable-length code

guillefix 4th July 2016 at 11:51pm

aka symbol code

In a variable-length code, one assigns a codeword to each letter in an alphabet. Formally, a variable-length code is a function C: \mathcal{X} \rightarrow A^*, where \mathcal{X} is the source alphabet, A is the code alphabet, and \cdot^* is the Kleene star.

The extension of C is the natural extension of C to \mathcal{X}^*.

The codewords are all the elements of the codomain of C.

C is uniquely decodable if its extension C^* is one-to-one.
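A sufficient condition for unique decodability is that the code be prefix-free: no codeword is a prefix of another. A minimal check, with illustrative binary codewords:

```python
# A code is prefix-free if no codeword is a proper prefix of another codeword.
def is_prefix_free(codewords):
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

print(is_prefix_free(["0", "10", "11"]))  # True: a prefix code
print(is_prefix_free(["0", "01", "11"]))  # False: "0" is a prefix of "01"
```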

Prefix code


(IC 2.2) Symbol codes - terminology and notation

An example is Morse code

Non-commutative Symbolic Coding

Vector calculus

guillefix 4th May 2016 at 7:56pm

Vehicle

guillefix 5th July 2016 at 4:12am

Videocamera

guillefix 25th June 2016 at 4:14am

Virgo Supercluster

guillefix 5th July 2016 at 3:26am

The Virgo Supercluster (Virgo SC) or the Local Supercluster (LSC or LS), one of the millions of superclusters, is a mass concentration of galaxies that contains the Virgo Cluster in addition to the Local Group, which in turn contains the Milky Way and Andromeda Galaxy.

A 2014 study indicates that the Virgo Supercluster is only a lobe of a greater supercluster, Laniakea, which is centered on the Great Attractor.

Virtual memory

guillefix 30th June 2016 at 1:22am

An abstraction of the physical memory in a computer, presented to processes by the Operating system for Memory allocation

What is virtual memory, how is it implemented, and why do operating systems use it?

Virtual reality

guillefix 3rd July 2016 at 5:16am

viscoelastic.png

guillefix 7th February 2016 at 5:55pm

Viscoelasticity

guillefix 11th May 2016 at 2:10pm

Viscoelasticity is the property of a material that displays both viscosity and elasticity. Such materials are called viscoelastic.

See Viscosity and elasticity

Viscosity and elasticity

guillefix 7th February 2016 at 10:12pm

The response of matter to a shear stress

Hookean solid: Shear strain proportional to shear stress. The proportionality constant is 1/G, the shear modulus.

Newtonian fluid: Rate of shear strain proportional to shear stress. The proportionality constant is 1/\eta, the viscosity.

Viscoelastic materials: Different responses at different time-scales. Often: elastic response with fixed strain when stress is first applied, but after a relaxation time, \tau, the fluid becomes viscous and the strain then increases linearly.

Fig 1.

Shear-thinning fluid: Viscosity decreases with shear rate.

Shear-thickening fluid: Viscosity increases with shear rate.

The latter three behaviours can often be associated with the fluid being a dispersion of colloidal particles.

In reality, all fluids are slightly viscoelastic, but their relaxation times are very small indeed. When you apply a stress to a fluid, its energy instantaneously increases because you are pushing atoms together. This exerts a restoring force that sustains the stress momentarily. The difference between a fluid and a solid is that the fluid can very quickly rearrange its atoms into a state of lower stress (without needing to break many expensive bonds, as a crystal with long-range order would). The key to the fluid having an instantaneous shear modulus, though, is that the timescale on which this opposing force builds up is still shorter than the relaxation time, I think.

A way to estimate this relaxation time for the fluid is by considering the atoms that get trapped in "cages" by neighbouring atoms

Such an atom is in a higher-energy (and lower-entropy) state, and to relax it needs to overcome the potential barrier set up by its neighbouring atoms. Due to the stochastic nature of this process, the relaxation time follows an Arrhenius behaviour, τν1exp(ϵkT)\tau \sim \nu^{-1} \exp(\frac{\epsilon}{kT}) (where ν\nu is the frequency of attempts to escape). Plugging in measured or estimated values, this gives 101210^{-12}101010^{-10} s, which explains why the fluid appears viscous on the timescales of most experiments. By looking at Fig. 1, we can estimate the viscosity of a fluid to be G0τG_0 \tau, which thus depends rather strongly on temperature. This turns out to be the basis for the liquid-to-glass transition.
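The order of magnitude of this relaxation time (and the resulting viscosity estimate ηG0τ\eta \approx G_0 \tau) can be checked with a quick back-of-the-envelope computation. All the parameter values below (attempt frequency, barrier height, instantaneous shear modulus) are illustrative assumptions, not measured values:

```python
import math

k_B = 1.381e-23         # Boltzmann constant, J/K
nu = 1e13               # attempt frequency, Hz (typical atomic vibration, assumed)
eps = 0.1 * 1.602e-19   # barrier height ~0.1 eV (illustrative assumption)
T = 300.0               # room temperature, K
G0 = 1e9                # instantaneous shear modulus, Pa (assumed order of magnitude)

# Arrhenius estimate of the structural relaxation time
tau = (1.0 / nu) * math.exp(eps / (k_B * T))

# Maxwell-style viscosity estimate: eta ~ G0 * tau
eta = G0 * tau

print(f"tau ~ {tau:.1e} s")      # falls in the 1e-12 .. 1e-10 s range
print(f"eta ~ {eta:.1e} Pa s")
```

With these assumed numbers the relaxation time comes out around 101210^{-12} s, consistent with the range quoted above, and the viscosity is of order 10310^{-3} Pa s, i.e. a water-like fluid.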

However, as the temperature approaches the glass transition temperature, the temperature dependence of the relaxation time (and thus of the viscosity) changes. The viscosity is in fact found to appear to diverge at a finite temperature, as described by the Vogel-Fulcher law. As the relaxation time becomes large enough, the system falls out of equilibrium with respect to experimental time scales, and the liquid forms a glass. The transition to a glass is, however, not a (thermodynamic) phase transition: it depends on the rate at which we lower the temperature, and it is in fact a kinetic transition (see Soft matter Jones book section 2.4). The situation here is sometimes called broken ergodicity (I think: isn't this similar to what happens in phase transitions with spontaneously broken symmetries?).

While there is no full theory of glass formation yet, a few have been proposed. An early approach is the free volume theory, but its assumptions are questionable and its predictions sometimes disagree with experiment. More modern theories use the idea of cooperativity: as the temperature is lowered, the free volume decreases, and the molecules get more "cramped" together. Then, for a molecule to move, its neighbours must move in a certain cooperative fashion. See work by Adam and Gibbs.

Elasticity in solids

Apart from the shear modulus described above for Hookean solids, there are also:

  • Young's modulus (EE), ratio of stress to strain for tensile stress.
  • Bulk modulus (KK). Ratio of stress over fractional volume change for uniform stress from all directions (isotropic).

A simple calculation (see Soft matter Jones book page 13) shows that for a Hookean solid (atoms connected by Hookean springs), Young's modulus is k/ak/a, where kk is the spring constant per spring, and aa is the equilibrium interatomic separation. By considering a real potential expanded around its minimum (and the typical shape of such a potential, like a Lennard-Jones potential), we can see that this is of order ϵ/a3\epsilon/a^3, where ϵ\epsilon is the depth of the interatomic potential minimum, i.e. the bond energy.

This means that a material with a high density of strong bonds is stiff, while a material with a low density of weak bonds is floppy (soft).
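The estimate Eϵ/a3E \sim \epsilon/a^3 is easy to sanity-check numerically. The bond energy and spacing below are assumed values typical of a strongly bonded solid, not taken from any particular material:

```python
# Order-of-magnitude estimate of Young's modulus as E ~ eps / a^3,
# using illustrative values for a strongly bonded solid (assumptions).
eV = 1.602e-19    # J per electronvolt
eps = 4.0 * eV    # bond energy ~4 eV (assumed, e.g. a covalent solid)
a = 2.0e-10       # interatomic spacing ~2 Angstrom (assumed)

E = eps / a**3    # Pa
print(f"E ~ {E/1e9:.0f} GPa")
```

This lands in the tens-of-GPa range, the right ballpark for stiff covalent or metallic solids, while weak bonds (~0.1 eV) at larger spacings give the GPa-or-below moduli of soft matter.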

It is important to note that real solids are in fact observed to exhibit a kind of viscosity. If a stress is applied for long enough, a solid with impurities, dislocations, etc. can creep as these dislocations move around (since moving a dislocation only involves breaking a few bonds, it is much more likely than the strain of a perfect crystal increasing). See Principles of CMP book; also remember how stable the square lattices of bucky balls were?

Visual arts

guillefix 22nd May 2016 at 3:25pm

Visualization tools

guillefix 20th July 2016 at 1:43pm

Voting theory

guillefix 8th April 2016 at 6:15pm

VR/AR innovation

guillefix 1st June 2016 at 7:20pm

HoloLens

http://mixed.one/#p5

HTC VIVE, STAR VR, Oculus Rift, Sony VR, Samsung Gear... The Void. Magic Leap, etc.

Game Room: Blockchain Meets Virtual Reality See Blockchain.

War

guillefix 28th June 2016 at 4:34pm

Warp drive

guillefix 7th May 2016 at 5:07pm

Warp drives allow time travel: it is a geometrical matter of the starting and ending points, independent of how one travels between them. Still, we don't know if they are actually possible, and they are rather unphysical in many respects.

Water

guillefix 8th July 2016 at 12:00am

Water resource management

guillefix 1st July 2016 at 11:09pm

water_melons_cmp.png

guillefix 15th February 2016 at 11:01pm

Weapon

guillefix 19th April 2016 at 10:57pm

Bladed weapons

Sword

Weather

guillefix 5th May 2016 at 10:37pm

Web development

guillefix 30th June 2016 at 1:07am

Welcome

guillefix 22nd June 2016 at 4:51pm

Welcome to this digital dynamic representation of the Cosmos, as experienced by me.

Contents

Wilhelm Schickard

guillefix 4th May 2016 at 2:09am

http://people.idsia.ch/~juergen/schickard.html

https://en.wikipedia.org/wiki/Wilhelm_Schickard

(1592 - 1635)

Father of the computer age

"Computer history starts in 1623, when Wilhelm Schickard built mankind's first automatic calculator. Schickard's machine could perform basic arithmetic operations on integer inputs. His letters to Kepler, discoverer of the laws of planetary motion, explain the application of his "calculating clock" to the computation of astronomical tables.

The non-programmable Schickard machine was based on the traditional decimal system. Leibniz subsequently discovered the more convenient binary system (1679), an essential ingredient of the world's first working program-controlled computer, due to Zuse (1941)."

Wireless power transmission

guillefix 1st July 2016 at 6:54pm

Wireless telecommunication

guillefix 1st July 2016 at 6:54pm

Wi-Fi

3G/4G

WKB method

guillefix 8th June 2016 at 1:00am

For linear differential equations of any order, with non-constant coefficients (in general). See here and here.

As shown in the example in the notes, multiple scales fails when the frequency of the fast oscillation depends on the slow scale.

Then, one has to instead use the WKB ansatz:

y=eiϕ(x)/ϵA(x;ϵ)y=e^{i\phi(x)/\epsilon}A(x;\epsilon)

in the dispersive case, or

y=eϕ(x)/ϵA(x;ϵ)y=e^{\phi(x)/\epsilon}A(x;\epsilon)

in the dissipative case.

When substituting this into an equation given in a certain form (a form in which all second-order ODEs can be expressed, see first lectures by Bender), one gets a series of equations, one at each increasing order in ϵ\epsilon:

  • Eikonal equation
  • Transport equation
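As a minimal sketch, assume the equation is already in the Schrödinger form ϵ2y=Q(x)y\epsilon^2 y'' = Q(x)\, y (the dissipative case). Substituting yexp[1ϵS0(x)+S1(x)]y \sim \exp[\frac{1}{\epsilon}S_0(x) + S_1(x)] and collecting powers of ϵ\epsilon gives the two equations above:

```latex
% dissipative case: \epsilon^2 y'' = Q(x) y,
% with ansatz y \sim \exp\!\left[\tfrac{1}{\epsilon}S_0(x) + S_1(x)\right]
(S_0' + \epsilon S_1')^2 + \epsilon (S_0'' + \epsilon S_1'') = Q(x)
% O(1):  eikonal equation
(S_0')^2 = Q(x) \quad\Rightarrow\quad S_0 = \pm \int^x \sqrt{Q(t)}\, dt
% O(\epsilon):  transport equation
2 S_0' S_1' + S_0'' = 0 \quad\Rightarrow\quad S_1 = -\tfrac{1}{4} \ln Q
% leading-order WKB solution:
y \sim C\, Q^{-1/4} \exp\!\left[\pm \frac{1}{\epsilon} \int^x \sqrt{Q(t)}\, dt\right]
```

The eikonal equation fixes the fast phase/growth S0S_0, and the transport equation fixes the slowly varying amplitude, giving the familiar Q1/4Q^{-1/4} prefactor.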

Turning points

Use Matched asymptotic expansions. At the turning point itself, the leading-order solution is an Airy function.

Wright-Fisher model

guillefix 26th April 2016 at 3:32am

There are many variants.

See Population genetics

Assumptions:

  • Non-overlapping generations
  • ...

Haploid Wright-Fisher model with selection

The definition can be found here:

Definition (Haploid Wright-Fisher model with selection): In a panmictic, haploid population of constant size NN, where individuals are of type aa or AA: if the generation at time tt consists of kk individuals of type aa and NkN-k of type AA, then, according to the Wright-Fisher model with selection, the generation at time t+1t+1 is formed by NN individuals, each of which has a probability of being of type aa given by:

P(type a)=k(1+s)k(1+s)+NkP(\text{type a}) = \frac{k(1+s)}{k(1+s)+N-k}

and is of type AA otherwise. The process is called sampling with replacement, because we are, in effect, replacing each individual of the previous population by a new one whose type follows a given distribution of alleles. ss is called the selection coefficient, and 1+s1+s is the fitness of type aa. If we give a fitness 1+s1+s' to type AA, then we use

P(type a)=k(1+s)k(1+s)+(Nk)(1+s)P(\text{type a}) = \frac{k(1+s)}{k(1+s)+(N-k)(1+s')}

And one can see how this would be generalized for more possible types in the model.

The way this probability comes about is:

  • The denominator is just to normalize the probability
  • In the numerator, kk is the number of individuals of type aa. The factors (1+s)(1+s) and (1+s)(1+s') determine the relative average number of offspring per individual. By this I mean that average number of offspring of a type a individualaverage number of offspring of a type A individual=1+s1+s\frac{\text{average number of offspring of a type a individual}}{\text{average number of offspring of a type A individual}} = \frac{1+s}{1+s'}. The average number of offspring of type aa, for instance, is P(type a)NP(\text{type a}) N, as this is the mean of a binomial distribution (here, for the number of aa-type individuals), or of a multinomial distribution if more than two types are being considered.

If s=0s=0 for all types, selection doesn't play a role, and the model describes genetic drift only.
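The sampling-with-replacement dynamics above is straightforward to simulate. This is a minimal sketch; the function name and the parameter values in the demo run are assumptions for illustration:

```python
import random

def wf_selection_step(k, N, s, rng=random):
    """One Wright-Fisher generation with selection: each of the N offspring
    is independently of type a with probability k(1+s) / (k(1+s) + N - k)."""
    p_a = k * (1 + s) / (k * (1 + s) + (N - k))
    return sum(1 for _ in range(N) if rng.random() < p_a)

# Illustrative run; N, s, and the initial count are assumed demo values.
random.seed(0)
N, s, k = 100, 0.05, 10
for generation in range(500):
    k = wf_selection_step(k, N, s)
    if k in (0, N):  # absorption: type a is lost or fixes
        break
print("type a count after", generation + 1, "generations:", k)
```

Note that k=0k=0 and k=Nk=N are absorbing states: without mutation, a lost allele never returns and a fixed allele never leaves, so every run eventually ends in loss or fixation.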

Haploid Wright-Fisher model with selection and mutation

Also described here.

Starting from the same setup as above (for the Haploid Wright-Fisher model with selection), the definition for the model with mutation is:

Definition (Haploid Wright-Fisher model with selection and mutation): If there are kk individuals of type aa among the parents (and NkN-k individuals of type AA), and we have mutation rates μ1\mu_1 for aAa \rightarrow A and μ2\mu_2 for AaA \rightarrow a, then the probability of type aa (also called the proportion of potential offspring, in the frequentist language often used in biology) is:

ψk=k(1+s)(1μ1)k(1+s)+Nk+(Nk)μ2k(1+s)+Nk\psi_k = \frac{k(1+s)(1-\mu_1)}{k(1+s)+N-k} + \frac{(N-k)\mu_2}{k(1+s)+N-k}

As above, each of the individuals in the next generation (offspring) has a type independently following this distribution. The number of type aa offspring follows a binomial distribution Bin(N,ψk)Bin(N, \psi_k).
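The offspring probability ψk\psi_k is easy to compute directly, and two sanity checks follow from the definition: with no mutation it reduces to the selection-only probability, and with mutation even a lost allele (k=0k=0) reappears with probability μ2\mu_2 per offspring. A small sketch (parameter values are illustrative assumptions):

```python
def psi(k, N, s, mu1, mu2):
    """Probability that an offspring is of type a under selection and
    mutation (haploid Wright-Fisher), as in the definition above."""
    W = k * (1 + s) + (N - k)  # fitness-weighted total of the parent generation
    return (k * (1 + s) * (1 - mu1) + (N - k) * mu2) / W

# No mutation: reduces to the selection-only probability
assert psi(10, 100, 0.05, 0.0, 0.0) == 10 * 1.05 / (10 * 1.05 + 90)

# Mutation keeps a lost allele in play: even with k = 0, psi > 0
print(psi(0, 100, 0.05, 1e-4, 1e-4))  # equals mu2 = 1e-4
```

Because ψk>0\psi_k > 0 for all kk when μ2>0\mu_2 > 0 (and ψk<1\psi_k < 1 when μ1>0\mu_1 > 0), the boundary states are no longer absorbing, which is the qualitative difference from the selection-only model.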

Fixation

Diffusion approximation

See page 326 in here for instance


See this question

Writing

guillefix 1st July 2016 at 11:11pm

Writing system

guillefix 1st July 2016 at 11:11pm

Writing tool

guillefix 1st July 2016 at 11:23pm

Handwriting tools

Paper, pencil, pen

Automated writing tools

Printing press

Written language

guillefix 3rd July 2016 at 2:46pm

Zoology

guillefix 5th July 2016 at 3:53am

Branch of biology that studies animals

See Tree of life

Study of animal behaviour: Ethology

See Animal locomotion

Vertebrates

Invertebrates

Insects

Beetle (insect)